Using the AHRQ Medical Office Survey on Patient Safety Culture
April 29, 2011
A conference call conducted on April 29, 2011, provided users with an overview of the development and use of the Medical Office Survey on Patient Safety Culture. The following is a transcript of the conference call.
- Joann Sorra, Westat Project Director for the AHRQ Surveys on Patient Safety Culture (SOPS).
- John Hickner, Chairman of Family Medicine, Cleveland Clinic.
- Lyle J. (L.J.) Fagnan, Associate Professor of Family Medicine at Oregon Health & Science University and Director of the Oregon Rural Practice-based Research Network (ORPRN).
- Naomi Dyer, Senior Study Director at Westat.
Joann Sorra: Good afternoon and welcome to our webinar on using the Agency for Healthcare Research and Quality's (AHRQ) Medical Office Survey on Patient Safety Culture. My name is Joann Sorra. I'm the Westat Project Director for the AHRQ Surveys on Patient Safety Culture and I'll be the moderator for today's webinar.
Currently, there are patient safety culture surveys for three health care settings: hospitals, medical offices, and nursing homes. AHRQ released the Hospital Survey in 2004, the Nursing Home Survey in 2008 and the Medical Office Survey in 2009. The completed surveys and toolkit materials can be found on the AHRQ Web site. AHRQ is currently funding the development of a patient safety culture survey for use in retail pharmacies. Today's webinar will focus on the Medical Office Survey, also referred to throughout today's presentation as the Medical Office SOPS.
In addition to me as the moderator, we're really pleased today to welcome three outstanding speakers. Joining us from Cleveland, Ohio, is Dr. John Hickner, Chairman of Family Medicine at Cleveland Clinic. Joining us from Portland, Oregon, is Dr. L.J. Fagnan, Associate Professor of Family Medicine at Oregon Health and Science University and Director of the Oregon Rural Practice-based Research Network, ORPRN. And here in Rockville, Maryland, is Dr. Naomi Dyer, Senior Study Director at Westat.
The agenda for the webinar is outlined on slide 4. We'll start with Dr. John Hickner giving an overview of the development of the Medical Office SOPS. Then Dr. L.J. Fagnan will present information about a large-scale data collection conducted with the Practice-Based Research Networks or PBRNs, along with valuable lessons learned. Next we'll hear from Dr. Naomi Dyer, who will present preliminary comparative results on the survey. She will also share results on how patient safety culture perceptions differ between physicians and medical office staff and by medical office characteristics.
Finally, I will share information about an upcoming comparative database for the survey and then we will end with a brief question-and-answer session. Now, on with our program. Our first speaker will be Dr. John Hickner, Chairman of Family Medicine at Cleveland Clinic, giving an overview of the development of the Medical Office SOPS.
John Hickner: Good afternoon. This is John Hickner. The objective of my presentation will be to describe for you the development of the Medical Office Survey on Patient Safety Culture and to discuss briefly the pilot testing of the survey that we did here in the United States.
As background, recall that the Hospital Survey on Patient Safety Culture was released in November of 2004. Before long, people realized that there was great need to have a survey for office practice as well and the same team at Westat began development then of the Medical Office Survey, which was released in January of 2009.
We went through the same development steps as occurred for the Hospital Survey on Patient Safety Culture, with a very scientific approach. We first reviewed the literature and existing surveys and conducted background interviews with medical office physicians and staff. I wanted to interject that the term patient safety was really quite foreign to many people who work in medical offices. They tended to think of the fire extinguisher and tripping on the sidewalk and, of course, we mean a lot more than that. We then identified key areas of safety culture in the medical office setting, developed the survey items, conducted cognitive testing of those items, and obtained input from over two dozen researchers and stakeholders so that we had the right content and formatting. Finally, we pilot tested the survey, analyzed the data, and then made final adjustments to the survey.
The goals of this survey are typical for safety culture surveys. First of all, we hope to raise awareness about patient safety among the staff who complete the survey, because one learns a great deal about patient safety simply from the nature of the questions. We also want, of course, to assess the current state of patient safety culture in office settings to provide a baseline for future measurement, and to use the survey for internal patient safety and quality improvement in primary care and other office practices. We want to evaluate the impact of patient safety and quality improvement initiatives, using this as one of the evaluation tools, and naturally to track improvements in patient safety culture over time.
The Hospital SOPS has the following 12 dimensions, which are listed in front of you and I'm not going to read each one of those but I will pause for a minute to have you look these over and when we move on to the next slide, you'll notice that some of these dimensions are carried forward but some new dimensions emerge for the medical office culture survey.
We're seeing now the first six dimensions of the Medical Office Survey on Patient Safety Culture and first are dimensions that look different from the Hospital Survey. You see patient safety and quality issues listed under number one, which are more specific to medical office practice. Information exchange with other settings is always a difficulty in outpatient practice and an opportunity for error and the need for safe procedures. Office processes and standardization, of course; work pressure and pace is a big issue in primary care offices especially. Patient tracking and followup, because patients are usually not in our offices, and staff training issues.
These are the other six dimensions of the Medical Office SOPS, which are quite similar to the Hospital Survey on Patient Safety Culture although the questions may be slightly different. I'll pause briefly for you to read these.
We then tested the survey. It was a large pilot test; it's hard to call it a pilot when you have 182 medical offices and 4,174 doctors, other health care providers, and staff respondents. So it was a very big pilot, and listed below are some of the organizations that we worked with in doing the pilot testing.
These are some characteristics of the offices in which we tested the survey: 63 percent were single specialty and 37 percent multi-specialty, and you can read the breakdown of the various types of practices and sizes of office practices. You can see that most of these practices were somewhat larger; there were not too many really small office practices.
Sixty-nine percent of these practices had only one location but some had several locations, obviously, and most were owned by a hospital or health care system, which is fairly typical now in the United States. Twenty-five percent, however, were owned by physicians or other groups of physician providers; 14 percent were university and academic. Use of electronic tools refers to implementation, for the most part, of electronic health records and this gives a breakdown of how many were fully implemented but as you can see most of these practices did not have full electronic medical records.
The survey was administered to all the doctors and staff in these offices, and most used paper forms; only 29 percent used the Web. The response rates were terrific. You can see those listed as 70 and 78 percent for paper versus 65 percent for Web. The average was 23 responses per office, with an average response rate of 74 percent. So a great response rate.
These are the people who responded broken down in this pie chart. You can see the job descriptions, so we have a pretty good representation of not only the physicians and other providers but also office and support staff, both clerical and clinical.
The data was analyzed, of course, mainly looking at the psychometrics and dropping poor-performing test items. Then the survey was completed and released in 2009, with a Spanish version to be released soon. Note that the Cronbach alpha reliability testing was very good, so basically that means that the dimensions we defined really hang together well.
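The Cronbach alpha reliability mentioned here checks whether the items within a dimension hang together, rising toward 1 when respondents answer a dimension's items consistently. A minimal sketch of the computation, using hypothetical 5-point responses rather than the actual pilot data:

```python
# Cronbach's alpha for the items making up one survey dimension.
# Rows are respondents, columns are items (hypothetical 5-point responses).
def cronbach_alpha(items):
    k = len(items[0])                      # number of items in the dimension
    def var(xs):                           # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([row[j] for row in items]) for j in range(k)]
    total_var = var([sum(row) for row in items])  # variance of respondents' totals
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

responses = [
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
]
print(round(cronbach_alpha(responses), 2))  # high alpha: items hang together
```

When items within a dimension track each other, the variance of the total score outstrips the sum of the item variances, so alpha approaches 1; values above roughly 0.7 are conventionally considered acceptable reliability.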
Joann Sorra: Thank you, John. Now let's transition to Dr. L.J. Fagnan, who will present information about a large-scale data collection conducted with the Practice-Based Research Networks, or PBRNs, along with valuable lessons that he learned.
Lyle J. (L.J.) Fagnan: Thank you, Joann. This is L.J. Fagnan in Portland, Oregon, and I'm going to build on what Dr. Hickner was talking about with the 182 pilot sites. We're going to look at this from the perspective of the individual practices and how we used our Practice-Based Research Networks to engage these practices, looking at their perceptions of the process and its potential uses.
We created a consortium of Practice-Based Research Networks out of the 110 primary care Practice-Based Research Networks around the Nation. ORPRN convened 11 networks to survey 311 primary care offices with the Medical Office SOPS. We sent an invitation to the PBRN director saying we want a mix of urban and rural clinics, specialty, health information technology enabled, and ownership variation and we actually developed a Web site for this study and that's still operational.
This is a list of the 11 networks around the country. Most are State specific, some are regional, and one is national but we've covered quite a bit of the United States with these 311 practices.
The goal was to have each network recruit 25 or more practices; the Practice-Based Research Networks built on their own experience and expertise in recruiting practices. What ORPRN did was provide a template for letters of invitation, information sheets for offices, and, importantly, information for the point of contact at the office, with duties that were quite specific.
In sampling, we wanted to get a wide range of primary care offices. We looked at single-specialty practices, which were predominantly family medicine, although we also had pediatrics and internal medicine, and we had some multi-specialty practices, though mostly single specialty. We had a mix of small practices with two or three clinicians; large practices were defined as four clinicians or more. We also looked at whether the practices were health information technology enabled; there were five items, and we considered a practice enabled if it had three of the five items listed.
We tried to find out how this was working and how these networks actually engaged the practices. Practices are pretty busy and we wanted to figure out what the networks did. The vast majority of networks actually traveled to the offices and delivered the surveys. They followed up with phone calls, but face-to-face meetings were the main method of connecting with these practices.
We asked the PBRN coordinators, not the practices, what worked best to distribute and collect the surveys in the offices. By and large, there is no substitute for face-to-face meetings with the point of contact in the offices. Many of the networks did this around the lunch hour; food seems to be a great convener, and the networks actually worked with the staff to complete the survey. We got the majority of the staff in these offices at that time, the point of contact then followed up afterward, and the PBRN followed up with the point of contact using E-mails and phone calls. But, again, the emphasis here is on showing up in person at the office.
We did a survey of the points of contact in these offices. This was an AHRQ task order with a defined deadline, so we had to do this very rapidly after we completed the study. These were the early response rates. We were trying to look at what barriers were encountered in completing this survey. What could we do to improve survey administration and what did the offices think about the value and potential uses of the survey? We had variable response rates with this group of early responders and two-thirds of the respondents were office managers.
Looking at the enthusiasm among clinicians and staff, there was some degree of enthusiasm for two-thirds of the folks. We felt good about this because practices are really busy. They have a lot going on and to get two-thirds of the folks saying, "We're somewhat enthusiastic" or "very enthusiastic" about this was very positive.
Some of the positive comments were, "I was hearing back that they could not wait to get the results back from the survey." "The staff were very enthusiastic when starting the survey, realizing it asks great questions about job satisfaction."
On the other side of the scale, comments were that staff were just really not responsive to filling out surveys. They kind of wondered why they were being surveyed and suspicious of what was going to be done about it. The survey results needed to take into account the fact that this is a snapshot. You're just getting the feelings of the person that day. If they're having a bad day, it might reflect on the survey and then competing priorities play a role. One practice reported, "Look, we're implementing an EHR [electronic health record]. This is not good timing for us and things are kind of stressful."
Do you feel the survey items addressed all areas of patient safety? Most folks felt it was fairly comprehensive. A couple of comments worth noting are that the medication error questions were too nonspecific to really inform quality improvement activities, and that respondents wanted more specifics on care coordination. In the qualitative comments, a gap in this survey was around access: access to parking, lighting, access for handicapped patients, and extended hours for clinics. This came across in some of the comments.
Westat created reports for each of these 311 offices and these reports went to each PBRN for their group of offices, a 42-page report, and the networks then decided how they were going to distribute these reports. What we did is we took PDFs and E-mailed them to the lead clinician and the point of contact at the office and many of us said we're going to go visit or have visited the office with the results.
So the practices, and this is early on, held meetings or were planning to do so. Some, about a quarter, were just going to provide written reports only. And some said, you know, we're not going to do anything with it.
Again, this is from the point of contact perspective. Has your office benefited from participating in this survey? They said that obtaining internal data in a safe environment was very beneficial and allowed for honest answers. These are clinician comments here. Interestingly, areas of concern from the staff perspective opened a dialogue on many issues. One office staff point of contact said, "Doubt that we'll discuss the report; the office manager and physician didn't seem interested in exploring the report." Another physician said, "You know, I have monthly staff meetings and I'm going to break these down into sections and discuss them at each staff meeting."
What feedback have you heard from the medical offices in response to the reports? They thought that the results were interesting; one office manager felt like it was too lengthy and complex. A comment—this is from the PBRN perspective—"I got the feeling that most clinics didn't share the results with their staff, even when the PBRN offered to try to be helpful." There was some confusion around negatively worded questions.
We asked for suggestions about using the report in the medical office and they said we need to explain the results carefully, particularly around the reverse coding items and double negatives. Again, no substitute for going to the practice and talking to the clinicians and staff. The practices felt that the PBRN should provide education and support; otherwise, many offices don't take the time to review the results or share them.
This is my last slide here. I asked about any other comments, and here is what they wrote, using, again, what the PBRNs heard from their practices. "The project was much more fun than I anticipated." "The results we reviewed with the clinics were well received by staff and administration." "The range of responses I heard when implementing this survey was great. Some examples are, 'I'm so glad you asked. Nobody ever asked the front desk for their opinion before.' Another young woman came with her survey in her sealed envelope tightly clutched to her chest. She asked, 'Are you absolutely sure my manager will never see my survey? They won't know it's me, right?' I also heard, 'This is the dumbest thing I've ever done.' That person was very interested in the results once it became apparent that things weren't working as well as she thought they were." Next, I turn it over to Naomi.
Joann Sorra: Thank you, L.J. Now let's transition to Dr. Naomi Dyer, who will present preliminary comparative results on the survey. She will also share results on how patient safety culture perceptions differ between physicians and medical office staff and by medical office characteristics.
Naomi Dyer: Thank you, Joann. This is Naomi and now that we've had a nice background of the development and pilot of the survey as well as the PBRN effort, I'm going to be focusing on two objectives.
First, I'm going to present some of the comparative results from the combined pilot-PBRN database. The full report can be found online at the link shown on your screen. Second, I'm going to present some results that examine the relationships between the Medical Office SOPS scores with staff positions and medical office characteristics.
The combined pilot-PBRN database consists of 470 medical offices with 10,567 respondents. The overall response rate, which is simply the total number of staff responding divided by the total number of staff asked to complete the survey, was 73 percent. The average response rate across the 470 medical offices was 78 percent, with about 22 respondents per medical office.
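The two response rates described here are computed differently: the overall rate pools all respondents and all staff asked across offices, while the average rate is the mean of each office's own rate. A small sketch with made-up office counts (the numbers are illustrative, not the study data):

```python
# Overall vs. average response rate (illustrative office counts, not the actual data).
offices = [
    {"responded": 20, "asked": 25},
    {"responded": 15, "asked": 25},
    {"responded": 30, "asked": 35},
]

# Overall rate: pool all respondents and all staff asked across offices.
overall = sum(o["responded"] for o in offices) / sum(o["asked"] for o in offices)

# Average rate: compute each office's own rate, then average across offices.
average = sum(o["responded"] / o["asked"] for o in offices) / len(offices)

print(f"overall: {overall:.0%}, average: {average:.0%}")
```

The two figures diverge when office sizes differ, because the overall rate weights large offices more heavily, which is why the transcript reports both 73 percent overall and 78 percent average.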
This figure shows 6 of the 12 patient safety composites, ordered from the highest percent positive to the lowest. As you can see here, the top average percent positive response, for teamwork, was 82 percent, followed by patient care tracking and followup at 77 percent, and organizational learning and overall perceptions of patient safety and quality, both at 74 percent.
Overall, the three lowest composites were work pressure and pace, which had only 46 percent positive responses, followed by information exchange with other settings at 54 percent, and office processes and standardization at 59 percent positive. So what we see is some variability across the composites, ranging from an average of 46 percent positive to 82 percent positive.
The survey also had an item asking the respondents to provide an overall rating on patient safety. As can be seen here, an average of 64 percent of respondents rated their medical office as either excellent or very good. As noted, these are just a preview of the results and the full results including breakouts by staff position can be found on the AHRQ Web site.
Switching gears a little bit, we have this really large data set and we wanted to explore the relationships between the patient safety culture scores and staff positions and medical office characteristics.
To do this, we used the 12 patient safety composite scores, but we also created an overall average composite score, which is simply the average across the 12 composites, a summary kind of patient safety culture score. We also created an average rating on quality: items on the survey asked respondents to rate their medical offices on the extent to which the office was patient centered, effective, timely, efficient, and equitable, and we averaged those items to create the average rating on quality. Then we looked at the overall rating on patient safety. All the measures were calculated at the medical office level, and we looked at the percent positive response.
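The overall average composite score described here is just the simple mean of an office's 12 composite percent-positive scores. A sketch using hypothetical values for one office (a few mirror the averages reported on the earlier slides; the rest are made up for illustration):

```python
# Hypothetical percent-positive scores for one office on the 12 SOPS composites.
composites = {
    "Teamwork": 82, "Patient Care Tracking/Followup": 77,
    "Organizational Learning": 74, "Overall Perceptions of Safety and Quality": 74,
    "Staff Training": 70, "Communication Openness": 68,
    "Leadership Support for Patient Safety": 66,
    "Communication About Error": 65, "Office Processes and Standardization": 59,
    "Patient Safety and Quality Issues": 57,
    "Information Exchange With Other Settings": 54, "Work Pressure and Pace": 46,
}

# Overall average composite score: the simple mean across the 12 composites.
avg_composite = sum(composites.values()) / len(composites)
print(f"Average composite score: {avg_composite:.1f}% positive")
```

Because each composite contributes equally, a very low score on one dimension (here, work pressure and pace) pulls the summary down even when most dimensions score well.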
So we had five questions we wanted to explore. Are there differences in patient safety culture scores by staff position, by medical office characteristics such as office size, ownership, specialty, and the degree of health information technology implementation?
The first question was, are there differences by staff position? We predicted that, of all the staff positions, physicians would be the most positive about patient safety culture in their medical offices.
There were seven staff positions listed on the survey, and the respondents mostly fell into the administrative or clerical category, as you can see here, at 28 percent. These staff include the front desk, receptionists, and medical records personnel. This is followed by other clinical staff, such as technicians and therapists, then physicians at about 20 percent of the respondents, and then RNs [registered nurses], LVNs [licensed vocational nurses], LPNs [licensed practical nurses], management, physician assistants, etc., make up the rest of the sample.
To look at this, we calculated the average percent positive score by staff position at the medical office level and conducted one-way analyses of variance to see if there were differences across staff positions. When we looked at all seven staff positions, we found that some staff positions were really similar to each other. Basically, management and physicians were very similar on all 15 measures that we looked at, and the other staff positions were also very similar to each other. Instead of trying to relay all of the different relationships that existed, we collapsed it down into management and physicians versus all others.
What we found was that management and physicians were more positive than the other staff on 11 of the 15 measures. We see an average difference of 9 percentage points and it ranged from 4 percentage points to 19 percentage points. As we go through these analyses, what I'll show you is this table that presents the results for the three summary scores: the average SOPS composite score, the average rating on quality, and the overall rating on patient safety and then I'll highlight for you any of the major differences. On this table you see here, we see for the average composite score, management and physicians were at 70 percent positive while all others were at 66 percent positive, for a 4 percentage point difference. For average rating on quality, we see an 11 percentage point difference. And for overall rating on patient safety, we see a 5 percentage point difference.
The largest difference we found where management and physicians were more positive than all other staff was for communication openness, which is on the left side of this figure, where management and physicians were 79 percent positive while all other staff were only 60 percent positive. We see a similar pattern for staff training, where management and physicians were 82 percent versus all other staff at only 68 percent positive, for a difference of 14 percentage points. And for communication about error, we see a 9 percentage point difference between the two staff positions.
While they were more positive on 11 of the 15 measures, they were less positive than all other staff on 3 of the 15 measures and we see these three measures here. For information exchange with other settings, the management and physicians were at 45 percent positive while all other staff were at 58 percent positive, for a 13 percentage point difference. For patient care tracking and followup, we see a 12 percentage point difference. Again, all other staff are more positive on this measure and the same thing for patient safety and quality issues, for a 5 percentage point difference. Now, with all of these numbers, if you've done the math, I said that they were more positive on 11 and less positive on 3 of the 15, so there's still one measure out there where they weren't significantly different from each other, and that measure was office processes and standardization.
Our second analysis question was, are there differences in these scores by medical office size? Based on our experience with the Hospital Survey on Patient Safety Culture, we found that smaller hospitals tended to have more positive scores. Therefore, we predicted that smaller medical offices would have more positive patient safety culture scores here.
To examine this question, we looked at the correlations between medical office size and percent positive patient safety culture scores, where size is defined as the total number of providers and staff. Office size ranged from 5 (the minimum, because you need at least 5 respondents to be included in the database) to 100.
When categorizing these medical offices into small, medium, and large, we see that over 50 percent fell into the medium office size, which is between 11 and 30 providers and staff, with 31 percent being large offices at 31 or more providers and staff. Nineteen percent were small medical offices.
We found that smaller medical offices did have slightly more positive patient safety culture scores than the larger offices on all 15 measures. We see moderate correlations, with the average correlation of .27, ranging from .14 to .41. Looking at the table, what we see for row one, the average composite score, we see a correlation of .34, which is moderate. What that translates to in percent positive is, small offices on that measure were 74 percent positive while medium offices were 67 percent positive and the large offices fell down to 62 percent positive. Our strongest relationship is actually on this table. It's with average rating on quality and what we see is a .41 correlation, which translates into the small offices being 77 percent positive and the large offices only being 58 percent positive. That's a 19 percentage point difference.
Our third analysis looked at the differences in patient safety culture scores by medical office ownership, where we predicted that physician/provider-owned offices would be more positive than other ownership types.
Looking at the database, we see that most of the medical offices were hospital and health system-owned at 51 percent, followed by provider and/or physician owned and university or medical school owned.
We found that physician/provider-owned offices were more positive than hospital/health system-owned offices on 10 of the 15 measures. This table shows all three different types of ownership. Let's walk through it. For the providers and physicians, we see that on all three measures, they are about 70 percent. When you look at the university/academic-owned and hospital/health system-owned offices, they're in the 60s and they're very similar to each other. We actually found that university/academic and hospital/health systems were similar on almost every single measure that we looked at. The largest difference that we found for the provider/physician and hospital/health system owned was for work pressure and pace, for an 11 percentage point difference, where again the provider/physicians were higher than the hospital/health system owned.
This slide shows the differences between the physician/provider and university/academic offices, where they were different and more positive on 7 of the 15 measures. The two largest differences: Not surprisingly, we see work pressure and pace appear here again, because you see providers and physicians are 54 percent positive and university/academic and hospital/health system are both at 43 percent positive. The next one, patient care tracking and followup, is the only time we see any difference between the university/academic-owned offices and hospital/health system. Not only are there differences between the physician and provider owned at 81 percent but also the hospital/health system and university are significantly different from each other.
Then we looked at specialty. The question was, is there a difference in patient safety culture scores between single- and multi-specialty offices? We predicted that single-specialty offices would be more positive than the multi-specialty offices.
Looking at the database, we see that 58 percent of the medical offices were single-specialty offices; of the multi-specialty offices, most were multi-specialty with primary care only. To examine this, we performed a partial correlation between specialty and patient safety culture scores so that we could control for office size.
We found that single-specialty offices tended to have slightly higher SOPS scores on 6 of the 15 measures. Our average correlation was .13, which is actually kind of on the low range, ranging from .10 to .18. Looking at this table, we see for average SOPS composite score, the correlation was .12. Translating that into average percent positive, we see about a 4 percentage point difference, where single specialty are higher at 68 percent and multi-specialty are lower at 64 percent. We see a similar small relationship with average rating on quality, and when we get to overall rating on patient safety, it's not significant.
For specialty, though, we looked at the largest difference and that was for owner/managing partner and leadership support for patient safety and we see a 6 percentage point difference, where, again, your single specialty is more positive on this than your multi-specialty.
Our final analysis was to see if there was a relationship between health information technology (Health IT) implementation and SOPS scores and we predicted that offices with greater Health IT implementation would have more positive patient safety culture scores than those with less Health IT implementation.
Again, we performed partial correlations so that we could control for office size between Health IT implementation and the Medical Office SOPS scores, where we assess degree of Health IT implementation as 1 equals not implemented and no plans to in the next 12 months all the way to 4, at fully implemented.
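A partial correlation, as used here, removes the linear influence of the control variable (office size) from both variables before correlating them, and it can be computed from the three pairwise Pearson correlations. A minimal sketch with illustrative data (the values and variable names are hypothetical, not the study data):

```python
# Pearson correlation between two equal-length lists.
def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    sa = sum((ai - ma) ** 2 for ai in a) ** 0.5
    sb = sum((bi - mb) ** 2 for bi in b) ** 0.5
    return cov / (sa * sb)

# Partial correlation of x and y, controlling for z, from pairwise correlations.
def partial_corr(x, y, z):
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / (((1 - rxz**2) * (1 - ryz**2)) ** 0.5)

# Illustrative values only: Health IT level (1-4), percent positive, office size.
it_level = [1, 2, 2, 3, 4, 4, 3, 1]
pct_pos  = [60, 64, 62, 70, 72, 75, 68, 58]
size     = [8, 12, 20, 15, 40, 35, 25, 10]
print(round(partial_corr(it_level, pct_pos, size), 2))
```

Controlling for size matters because larger offices in this database tend to have both more Health IT implementation and lower percent-positive scores, so a raw correlation would mix the two effects.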
There were five Health IT tools that we looked at. Electronic appointment scheduling there on the first row, 81 percent of the medical offices were fully implemented on this tool while only 36 percent of the medical offices were fully implemented on electronic ordering of tests, imaging, and procedures. And then going down to the last row, we see 50 percent were fully implemented on electronic medical records.
What we found actually was that there weren't a lot of relationships with these five tools except for implementation of electronic medical records. So this is showing the relationship for electronic medical record implementation, where they were slightly higher on 11 of the 15 measures. Again, we see a low to moderate average correlation of .15, ranging from .10 to .27. Translating that into percent positive, we look at the average SOPS composite score. We see fully implemented offices were at 67 percent positive and not fully implemented were just at 66 percent positive. So, not a huge difference there when we dichotomize those fully implemented and not fully implemented. And we see average rating on quality was not significant and overall rating on patient safety had about a 3 percentage point difference between the fully implemented and not fully implemented.
Again, looking at the strongest relationship of the largest difference, we note patient safety and quality issues, where the fully implemented offices were at 63 percent positive and the not fully implemented were slightly lower, 5 percentage points lower, at 58 percent positive.
That was a lot to go over. For our conclusions, I'm just going to talk about these five analyses we discussed and let's recap. For staff position, we saw that overall, management and physicians had more positive patient safety culture scores than other staff except on those three where they were less positive. Smaller medical offices had slightly more positive patient safety culture scores than larger ones. Physician/provider-owned offices had more positive patient safety culture scores than hospital/health system-owned and university/academic offices.
For specialty, we found that single-specialty medical offices were slightly more positive than the multi-specialty offices. And for Health IT implementation, overall it was not strongly related to patient safety culture scores, although offices with greater EMR [electronic medical record] implementation had slightly higher patient safety culture scores.
If you have any questions, these are the two E-mail addresses you can write to, databases on safety culture and safety culture surveys, and we'd be happy to answer them. Thank you very much. And back to you, Joann.
Joann Sorra: Thanks, Naomi. There's just one more slide before we go to the question-and-answer session, and this is about the upcoming comparative database for the Medical Office Survey. As many hospitals and health systems know, the Hospital Survey has had a comparative database, and an annual report has been produced since 2007; we will be establishing a comparative database for this survey as well. The database will serve as a central repository for any medical office or system that has administered the survey and is willing to voluntarily submit its data. This will really be a great resource for comparing results with other medical offices.
Right now, we've simply presented comparative results on 470 medical offices, but once we receive data from across the Nation from those that have administered the survey, we hope to see a much larger database. Participating medical offices will receive a free Medical Office Survey feedback report that will compare their results to the latest benchmarks. An overall comparative report, similar to the hospital report, will be produced and made available on the AHRQ Web site in 2012.
The data submission will be open September 15th through October 15th. I encourage all of those hospitals, health systems, medical offices that are interested in administering this survey to do so before September and then submit the data and that way we can have a more robust database and better benchmarks. For more information, the AHRQ Web site does now have submission instructions and you can go to the site to see just what you need to do to submit to that database.
Page originally created September 2012