Using the AHRQ Medical Office Survey on Patient Safety Culture
Webinar Transcript
April 29, 2011
A conference call conducted on April 29, 2011, provided users with an overview of the development and use of the Medical Office Survey on Patient Safety Culture. The following is a transcript of the conference call.
Speakers | Presentations | Questions and Answers
Speakers
- Joann Sorra, Westat Project Director for the AHRQ Surveys on Patient Safety Culture (SOPS).
- John Hickner, Chairman of Family Medicine, Cleveland Clinic.
- Lyle J. (L.J.) Fagnan, Associate Professor of Family Medicine at Oregon Health & Science University and Director of the Oregon Rural Practice-based Research Network (ORPRN).
- Naomi Dyer, Senior Study Director at Westat.
Presentations
Joann Sorra: Good afternoon and welcome to our webinar on using the Agency for Healthcare Research and Quality's (AHRQ) Medical Office Survey on Patient Safety Culture. My name is Joann Sorra. I'm the Westat Project Director for the AHRQ Surveys on Patient Safety Culture and I'll be the moderator for today's webinar.
Currently, there are patient safety culture surveys for three health care settings: hospitals, medical offices, and nursing homes. AHRQ released the Hospital Survey in 2004, the Nursing Home Survey in 2008 and the Medical Office Survey in 2009. The completed surveys and toolkit materials can be found on the AHRQ Web site. AHRQ is currently funding the development of a patient safety culture survey for use in retail pharmacies. Today's webinar will focus on the Medical Office Survey, also referred to throughout today's presentation as the Medical Office SOPS.
In addition to me as the moderator, we're really pleased today to welcome three outstanding speakers. Joining us from Cleveland, Ohio, is Dr. John Hickner, Chairman of Family Medicine at Cleveland Clinic. Joining us from Portland, Oregon, is Dr. L.J. Fagnan, Associate Professor of Family Medicine at Oregon Health and Science University and Director of the Oregon Rural Practice-based Research Network, ORPRN. And here in Rockville, Maryland, is Dr. Naomi Dyer, Senior Study Director at Westat.
The agenda for the webinar is outlined on slide 4. We'll start with Dr. John Hickner giving an overview of the development of the Medical Office SOPS. Then Dr. L.J. Fagnan will present information about a large-scale data collection conducted with the Practice-Based Research Networks or PBRNs, along with valuable lessons learned. Next we'll hear from Dr. Naomi Dyer, who will present preliminary comparative results on the survey. She will also share results on how patient safety culture perceptions differ between physicians and medical office staff and by medical office characteristics.
Finally, I will share information about an upcoming comparative database for the survey and then we will end with a brief question-and-answer session. Now, on with our program. Our first speaker will be Dr. John Hickner, Chairman of Family Medicine at Cleveland Clinic, giving an overview of the development of the Medical Office SOPS.
John Hickner: Good afternoon. This is John Hickner. The objective of my presentation will be to describe for you the development of the Medical Office Survey on Patient Safety Culture and to discuss briefly the pilot testing of the survey that we did here in the United States.
As background, recall that the Hospital Survey on Patient Safety Culture was released in November of 2004. Before long, people realized that there was great need to have a survey for office practice as well and the same team at Westat began development then of the Medical Office Survey, which was released in January of 2009.
We went through the same development steps as occurred for the Hospital Survey on Patient Safety Culture, with a very scientific approach. We first reviewed the literature and existing surveys and conducted background interviews with medical office physicians and staff. I wanted to interject that the term patient safety was really quite foreign to many people who work in medical offices. They tended to think of the fire extinguisher and tripping on the sidewalk and, of course, we mean a lot more than that. We then identified key areas of safety culture in the medical office setting, developed the survey items, conducted cognitive testing of those items, and obtained input from over two dozen researchers and stakeholders so that we had the right content and formatting. Finally, we pilot tested the survey, analyzed the data, and then made final adjustments to the survey.
The goals of this survey are typical for safety culture surveys. First of all, we hope to raise staff awareness about patient safety by those who complete the survey because one learns a great deal about patient safety as you complete the survey by the nature of the questions. We also want, of course, to assess the current state of patient safety culture in office settings to provide a baseline for future measurement and use these surveys then for internal patient safety quality improvement in primary care and other office practices. We want to evaluate the impact of patient safety and quality improvement initiatives using this as one of the evaluation tools and naturally to track improvements in patient safety culture over time.
The Hospital SOPS has the following 12 dimensions, which are listed in front of you and I'm not going to read each one of those but I will pause for a minute to have you look these over and when we move on to the next slide, you'll notice that some of these dimensions are carried forward but some new dimensions emerge for the medical office culture survey.
We're seeing now the first six dimensions of the Medical Office Survey on Patient Safety Culture and first are dimensions that look different from the Hospital Survey. You see patient safety and quality issues listed under number one, which are more specific to medical office practice. Information exchange with other settings is always a difficulty in outpatient practice and an opportunity for error and the need for safe procedures. Office processes and standardization, of course; work pressure and pace is a big issue in primary care offices especially. Patient tracking and followup, because patients are usually not in our offices, and staff training issues.
These are the other six dimensions of the Medical Office SOPS, which are quite similar to the Hospital Survey on Patient Safety Culture although the questions may be slightly different. I'll pause briefly for you to read these.
We then tested the survey. It was a large pilot test; hard to call a pilot when you have 182 medical offices, 4,174 doctors, other health care providers and staff respondents. So it was a very big pilot and listed below are some of the organizations that we worked with in doing the pilot testing.
These are some characteristics of the offices in which we tested the survey: 63 percent were single specialty and 37 percent multi-specialty, and you can read the breakdown of the various types of practices and sizes of office practices. You can see that most of these practices were a bit larger; there were not too many really small office practices.
Sixty-nine percent of these practices had only one location but some had several locations, obviously, and most were owned by a hospital or health care system, which is fairly typical now in the United States. Twenty-five percent, however, were owned by physicians or other groups of physician providers; 14 percent were university and academic. Use of electronic tools refers to implementation, for the most part, of electronic health records and this gives a breakdown of how many were fully implemented but as you can see most of these practices did not have full electronic medical records.
The survey was administered to all the doctors and staff in these offices, and most used paper forms; only 29 percent used the Web. The response rate was phenomenal. You can see those listed as 70 and 78 percent versus 65 percent for Web. The average number of responses per office was 23 and the average response rate was 74 percent. So a great response rate.
These are the people who responded broken down in this pie chart. You can see the job descriptions, so we have a pretty good representation of not only the physicians and other providers but also office and support staff, both clerical and clinical.
The data was analyzed, of course, mainly looking at the psychometrics and dropping poor-performing test items. Then the survey was completed and released in 2009, with a Spanish version to be released soon. Note that the Cronbach alpha reliability testing was very good, so basically that means that the dimensions we defined really hang together well.
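For readers less familiar with Cronbach's alpha, the reliability statistic mentioned above can be illustrated with a short sketch in plain Python. The data here are hypothetical, not the pilot data; the point is only the formula, under which alpha rises as the items in a dimension vary together across respondents.

```python
# Cronbach's alpha for one survey dimension: a minimal illustration with
# made-up data, not the actual pilot analysis. Rows are respondents,
# columns are the items in the dimension (e.g., a 1-5 agreement scale).
def cronbach_alpha(responses):
    k = len(responses[0])  # number of items in the dimension
    def var(xs):           # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([row[i] for row in responses]) for i in range(k)]
    total_var = var([sum(row) for row in responses])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

responses = [  # hypothetical answers from four respondents to three items
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 4, 3],
]
alpha = cronbach_alpha(responses)  # close to 1 when items "hang together"
```

Here the three items move together across respondents, so alpha comes out high; values above roughly .70 are conventionally read as acceptable reliability.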
Joann Sorra: Thank you, John. Now let's transition to Dr. L.J. Fagnan, who will present information about a large-scale data collection conducted with the Practice-Based Research Networks, or PBRNs, along with valuable lessons that he learned.
Lyle J. (L.J.) Fagnan: Thank you, Joann. This is L.J. Fagnan in Portland, Oregon, and I'm going to build on what Dr. Hickner was talking about with 182 pilot sites and we're going to look at this from the perspective of the individual practices and how we used our Practice-Based Research Networks to engage these practices, looking at perceptions of processes by potential use.
We created a consortium of Practice-Based Research Networks out of the 110 primary care Practice-Based Research Networks around the Nation. ORPRN convened 11 networks to survey 311 primary care offices with the Medical Office SOPS. We sent an invitation to the PBRN director saying we want a mix of urban and rural clinics, specialty, health information technology enabled, and ownership variation and we actually developed a Web site for this study and that's still operational.
This is a list of the 11 networks around the country. Most are State specific, some are regional, and one is national but we've covered quite a bit of the United States with these 311 practices.
The goal was to have each network recruit 25 or more practices; the Practice-Based Research Networks built on their own experience and expertise in recruiting practices. What ORPRN did was provide a template for letters of invitation, information sheets for offices, and importantly, information for the point of contact at the office, with duties that were quite specific.
In our sampling, we wanted to get a wide range of primary care offices. We looked at single specialty; predominantly these were family medicine practices, but we also had pediatrics and internal medicine. We had some multi-specialty offices, though most were single specialty. We had a mix of small practices with two or three clinicians; large practices were defined as four clinicians or more. We also looked at whether the practices were health information technology enabled. There were five items here, and we considered a practice enabled if it had three of the five items listed.
We tried to find out how this was working and how these networks actually engaged the practices. Practices are pretty busy and we wanted to figure out what the networks did, so the vast majority of networks actually traveled to the offices and delivered the surveys. They followed up with phone calls, but face-to-face meetings were the main method of connecting with these practices.
We asked the PBRN coordinators, not the practices, what worked best to distribute and collect the surveys in the offices. By and large, there is no substitute for face-to-face meetings with the point of contact in the offices. Many of the networks did this around the lunch hour; food seems to be a great convener, and the coordinators actually worked with the staff to complete the survey. We got the majority of the staff in these offices at that time. The point of contact then followed up afterward, and the PBRN followed up with the point of contact using E-mails and phone calls. But, again, the emphasis here is showing up in person at the office.
We did a survey of the points of contact in these offices. This was an AHRQ task order with a defined deadline, so we had to do this very rapidly after we completed the study. These were the early response rates. We were trying to look at what barriers were encountered in completing this survey. What could we do to improve survey administration and what did the offices think about the value and potential uses of the survey? We had variable response rates with this group of early responders and two-thirds of the respondents were office managers.
Looking at the enthusiasm among clinicians and staff, there was some degree of enthusiasm for two-thirds of the folks. We felt good about this because practices are really busy. They have a lot going on and to get two-thirds of the folks saying, "We're somewhat enthusiastic" or "very enthusiastic" about this was very positive.
Some of the positive comments were, "I was hearing back that they could not wait to get the results back from the survey." "The staff were very enthusiastic when starting the survey, realizing it asks great questions about job satisfaction."
On the other side of the scale, comments were that staff were just really not responsive to filling out surveys. They kind of wondered why they were being surveyed and were suspicious of what was going to be done with it. The survey results needed to take into account the fact that this is a snapshot. You're just getting the feelings of the person that day. If they're having a bad day, it might reflect on the survey, and then competing priorities play a role. One practice reported, "Look, we're implementing an EHR [electronic health record]. This is not good timing for us and things are kind of stressful."
Do you feel the survey items addressed all areas of patient safety? Most folks felt it was fairly comprehensive. A couple of comments worth noting are that the medication error questions were too nonspecific to really guide quality improvement activities and that respondents wanted more specifics on care coordination. In the qualitative comments, a gap in this survey was really around the areas of access: access to parking, lighting, access for handicapped patients, and extended hours for clinics. This came across in some of the comments.
Westat created reports for each of these 311 offices and these reports went to each PBRN for their group of offices, a 42-page report, and the networks then decided how they were going to distribute these reports. What we did is we took PDFs and E-mailed them to the lead clinician and the point of contact at the office and many of us said we're going to go visit or have visited the office with the results.
So the practices—this is early on—had held meetings or were planning to do so. Some, about a quarter, were just going to provide written reports only. And some said, you know, we're not going to do anything with it.
Again, this is from the point of contact perspective. Has your office benefited from participating in this survey? They said that obtaining internal data in a safe environment was very beneficial and allowed for honest answers. These are clinician comments here. Interestingly, areas of concern from the staff perspective opened a dialogue on many issues. One office staff point of contact said, "Doubt that we'll discuss the report; the office manager and physician didn't seem interested in exploring the report." Another physician said, "You know, I have monthly staff meetings and I'm going to break these down into sections and discuss them at each staff meeting."
What feedback have you heard from the medical offices in response to the reports? They thought that the results were interesting; one office manager felt like it was too lengthy and complex. A comment—this is from the PBRN perspective—"I got the feeling that most clinics didn't share the results with their staff, even when the PBRN offered to try to be helpful." There was some confusion around negatively worded questions.
We asked for suggestions about using the report in the medical office and they said we need to explain the results carefully, particularly around the reverse coding items and double negatives. Again, no substitute for going to the practice and talking to the clinicians and staff. The practices felt that the PBRN should provide education and support; otherwise, many offices don't take the time to review the results or share them.
This is my last slide here. We asked for any other comments, and here is what the PBRN coordinators wrote, drawing again on what they heard from their practices. "The project was much more fun than I anticipated." "The results we reviewed with the clinics were well received by staff and administration." "The range of responses I heard when implementing this survey was great. Some examples are, 'I'm so glad you asked. Nobody ever asked the front desk for their opinion before.' Another young woman came with her survey in her sealed envelope tightly clutched to her chest. She asked, 'Are you absolutely sure my manager will never see my survey? They won't know it's me, right?' I also heard, 'This is the dumbest thing I've ever done.' That person was very interested in the results once it became apparent that things weren't working as well as she thought they were." Next, I turn it over to Naomi.
Joann Sorra: Thank you, L.J. Now let's transition to Dr. Naomi Dyer, who will present preliminary comparative results on the survey. She will also share results on how patient safety culture perceptions differ between physicians and medical office staff and by medical office characteristics.
Naomi Dyer: Thank you, Joann. This is Naomi and now that we've had a nice background of the development and pilot of the survey as well as the PBRN effort, I'm going to be focusing on two objectives.
First, I'm going to present some of the comparative results from the combined pilot-PBRN database. The full report can be found online at the link shown on your screen. Second, I'm going to present some results that examine the relationships between the Medical Office SOPS scores with staff positions and medical office characteristics.
The combined pilot-PBRN database consists of 470 medical offices with 10,567 respondents. The overall response rate, which is simply the total number of staff responding divided by the total number of staff asked to complete the survey, was 73 percent. The average response rate across the 470 medical offices was 78 percent, with about 22 respondents per medical office.
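The two rates are computed differently, which is why they can differ (73 versus 78 percent above). The distinction can be sketched with made-up numbers, not the actual database figures:

```python
# Each office is (staff responding, staff asked to complete the survey).
# These numbers are hypothetical; they only illustrate the two formulas.
offices = [(20, 25), (15, 25), (45, 50)]

# Overall response rate: pool all respondents and all invitees first.
overall = sum(r for r, a in offices) / sum(a for r, a in offices)

# Average response rate: compute each office's rate, then average them.
average = sum(r / a for r, a in offices) / len(offices)
```

The overall rate weights large offices more heavily, while the average rate treats every office equally, so with real data the two can diverge noticeably.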
This figure shows 6 of the 12 patient safety composites ordered from the highest percent positive to the lowest. As you can see here, the top average percent positive response was for teamwork at 82 percent, followed by patient care tracking and followup at 77 percent, and organizational learning and overall perceptions of patient safety and quality at 74 percent.
Overall, the three lowest composites were work pressure and pace, which only had 46 percent of positive responses, followed by information exchange with other settings at 54 percent, and office processes and standardization at 59 percent positive. So what we see is some variability across the composites, ranging from an average of 46 percent positive to 82 percent positive.
The survey also had an item asking the respondents to provide an overall rating on patient safety. As can be seen here, an average of 64 percent of respondents rated their medical office as either excellent or very good. As noted, these are just a preview of the results and the full results including breakouts by staff position can be found on the AHRQ Web site.
Switching gears a little bit, we have this really large data set and we wanted to explore the relationships between the patient safety culture scores and staff positions and medical office characteristics.
To do this, we used the 12 patient safety composite scores, but we also created an overall average composite score, which is simply the average across the 12 composites or a summary kind of patient safety culture score. We also created the average rating on quality. One of the items on the survey asked respondents to rate their medical offices on the extent to which their office was patient centered, effective, timely, efficient, and equitable and we took those items and averaged them across to create this average rating on quality, and then we looked at the overall rating on patient safety. All the measures were calculated at the medical office level and we looked at the percent positive response.
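The scores described above can be sketched in a few lines of Python. All numbers and the helper below are hypothetical; the real composites come from the survey's item-level responses.

```python
# Percent positive for a set of answers: the share that were positive
# (e.g., agree/strongly agree, with negatively worded items reversed).
def percent_positive(responses):
    """responses: list of booleans, True if the answer was positive."""
    return 100 * sum(responses) / len(responses)

# Hypothetical percent-positive scores for one office's 12 composites;
# the overall average composite score is simply their mean.
composite_scores = [82, 77, 74, 74, 70, 68, 66, 64, 62, 59, 54, 46]
average_composite = sum(composite_scores) / len(composite_scores)
```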
So we had five questions we wanted to explore. Are there differences in patient safety culture scores by staff position, by medical office characteristics such as office size, ownership, specialty, and the degree of health information technology implementation?
The first question was, are there differences by staff position? We predicted that, out of all the staff positions, the physicians would be the most positive about patient safety culture in their medical offices.
There were seven staff positions listed on the survey, and the respondents mostly fell into the administrative or clerical category, as you can see here, at 28 percent. These staff include the front desk, receptionists, and medical records personnel. This is followed by other clinical, meaning technicians and therapists, then your physicians at about 20 percent of the respondents, and then you see RNs [registered nurses], LVNs [licensed vocational nurses], LPNs [licensed practical nurses], management, physician assistants, etc., who make up the rest of the sample.
To look at this, we calculated the average percent positive score by staff position at the medical office level and conducted one-way analysis of variance to see if there were differences across staff positions. When we looked at all seven staff positions, we found out that there were some staff positions that were really similar to each other. Basically, we found that management and physicians were very similar on all of these 15 measures that we looked at and the other staff positions were also very similar. Instead of trying to relay all of the different relationships that existed, we collapsed it down into management and physicians versus all other.
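For readers curious about the mechanics, a one-way ANOVA compares variability between groups to variability within them. This is only a sketch of the F statistic with hypothetical office-level scores; the study itself used standard statistical software.

```python
# One-way ANOVA F statistic across groups of office-level percent
# positive scores. All data here are hypothetical.
def f_statistic(groups):
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand = sum(sum(g) for g in groups) / n  # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

mgmt_physicians = [70, 72, 68, 74]  # hypothetical percent positive
all_other_staff = [64, 66, 62, 68]
f = f_statistic([mgmt_physicians, all_other_staff])  # large F: groups differ
```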
What we found was that management and physicians were more positive than the other staff on 11 of the 15 measures. We see an average difference of 9 percentage points and it ranged from 4 percentage points to 19 percentage points. As we go through these analyses, what I'll show you is this table that presents the results for the three summary scores: the average SOPS composite score, the average rating on quality, and the overall rating on patient safety and then I'll highlight for you any of the major differences. On this table you see here, we see for the average composite score, management and physicians were at 70 percent positive while all others were at 66 percent positive, for a 4 percentage point difference. For average rating on quality, we see an 11 percentage point difference. And for overall rating on patient safety, we see a 5 percentage point difference.
The largest difference we found where management and physicians were more positive than all other staff was for communication openness, which is on the left side of this figure, where management and physicians were 79 percent positive while all other staff were only 60 percent positive. We see a similar pattern for staff training, where management and physicians were 82 percent versus all other staff at only 68 percent positive, for a difference of 14 percentage points. And for communication about error, we see a 9 percentage point difference between the two staff positions.
While they were more positive on 11 of the 15 measures, they were less positive than all other staff on 3 of the 15 measures and we see these three measures here. For information exchange with other settings, the management and physicians were at 45 percent positive while all other staff were at 58 percent positive, for a 13 percentage point difference. For patient care tracking and followup, we see a 12 percentage point difference. Again, all other staff are more positive on this measure and the same thing for patient safety and quality issues, for a 5 percentage point difference. Now, with all of these numbers, if you've done the math, I said that they were more positive on 11 and less positive on 3 of the 15, so there's still one measure out there where they weren't significantly different from each other, and that measure was office processes and standardization.
Our second analysis question was, are there differences in these scores by medical office size? Based on our experience with the Hospital Survey on Patient Safety Culture, we found that smaller hospitals tended to have more positive scores. Therefore, we predicted that smaller medical offices would have more positive patient safety culture scores here.
To examine this question, we looked at the correlations between medical office size and percent positive patient safety culture scores, where size is defined as the total number of providers and staff. The range went from 5 (you need at least 5 respondents to be included in the database) to 100.
When categorizing these medical offices into small, medium, and large, we see that over 50 percent fell into the medium office size, which is between 11 and 30 providers and staff, with 31 percent being large offices at 31 or more providers and staff. Nineteen percent were small medical offices.
We found that smaller medical offices did have slightly more positive patient safety culture scores than the larger offices on all 15 measures. We see moderate correlations, with the average correlation of .27, ranging from .14 to .41. Looking at the table, what we see for row one, the average composite score, we see a correlation of .34, which is moderate. What that translates to in percent positive is, small offices on that measure were 74 percent positive while medium offices were 67 percent positive and the large offices fell down to 62 percent positive. Our strongest relationship is actually on this table. It's with average rating on quality and what we see is a .41 correlation, which translates into the small offices being 77 percent positive and the large offices only being 58 percent positive. That's a 19 percentage point difference.
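The correlations here are ordinary Pearson correlations computed at the office level. A minimal sketch with hypothetical data follows; the negative direction mirrors the study's finding that larger offices scored lower, though the magnitude below is exaggerated by the tiny made-up sample.

```python
# Pearson correlation between office size and percent positive score.
# The five offices below are hypothetical.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

sizes = [8, 12, 20, 35, 60]    # total providers and staff per office
scores = [74, 72, 67, 64, 60]  # office-level percent positive
r = pearson_r(sizes, scores)   # negative: bigger office, lower score
```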
Our third analysis looked at the differences in patient safety culture scores by medical office ownership, where we predicted that physician/provider-owned offices would be more positive than other ownership types.
Looking at the database, we see that most of the medical offices were hospital and health system-owned at 51 percent, followed by provider and/or physician owned and university or medical school owned.
We found that physician/provider-owned offices were more positive than hospital/health system-owned offices on 10 of the 15 measures. This table shows all three different types of ownership. Let's walk through it. For the providers and physicians, we see that on all three measures, they are about 70 percent. When you look at the university/academic owned, and hospital/health systems, they're in the 60s and they're very similar to each other. We actually found that university/academic and hospital/health systems were similar on almost every single measure that we looked at. The largest difference that we found for the provider/physician and hospital/health system owned was for work pressure and pace, for an 11 percentage point difference, where again the provider/physicians were higher than the hospital/health system owned.
This slide shows the differences between the physician/provider and university/academic offices; the physician/provider offices were more positive on 7 of the 15 measures. The two largest differences: Not surprisingly, we see work pressure and pace appear here again, where providers and physicians are 54 percent positive and university/academic and hospital/health system are both at 43 percent positive. The next one, patient care tracking and followup, is the only place we see any difference between the university/academic-owned and hospital/health system-owned offices. Not only are the physician/provider-owned offices different at 81 percent positive, but the hospital/health system and university offices are also significantly different from each other.
Then we looked at specialty. The question was, is there a difference in patient safety culture scores between single- and multi-specialty offices? We predicted that single-specialty offices would be more positive than the multi-specialty offices.
Looking at the database, we see that 58 percent of the medical offices were single-specialty offices; of the multi-specialty offices, most were multi-specialty with primary care only. For this analysis, we performed a partial correlation between specialty and patient safety culture scores so that we could control for office size.
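A partial correlation can be computed from the three pairwise Pearson correlations using the standard first-order formula. The 0/1 coding of specialty and all data below are hypothetical; this only sketches the technique of controlling for office size.

```python
# Partial correlation r(x, y | z) from pairwise Pearson correlations.
# All data are hypothetical; specialty is coded 1 = single specialty.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def partial_r(x, y, z):
    rxy, rxz, ryz = pearson_r(x, y), pearson_r(x, z), pearson_r(y, z)
    return (rxy - rxz * ryz) / ((1 - rxz ** 2) * (1 - ryz ** 2)) ** 0.5

specialty = [0, 0, 1, 1, 0, 1]        # hypothetical 0/1 coding
score     = [64, 66, 70, 72, 65, 69]  # office-level percent positive
size      = [40, 35, 12, 10, 30, 15]  # providers and staff
r_partial = partial_r(specialty, score, size)
```

Controlling for size matters because specialty and size are themselves correlated; the raw specialty-score correlation would overstate the relationship.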
We found that single-specialty offices tended to have slightly higher SOPS scores on 6 of the 15 measures. Our average correlation was .13, which is actually kind of on the low range, ranging from .10 to .18. Looking at this table, we see for average SOPS composite score, the correlation was .12. Translating that into average percent positive, we see about a 4 percentage point difference, where single specialty are higher at 68 percent and multi-specialty are lower at 64 percent. We see a similar small relationship with average rating on quality, and when we get to overall rating on patient safety, it's not significant.
For specialty, though, we looked at the largest difference and that was for owner/managing partner and leadership support for patient safety and we see a 6 percentage point difference, where, again, your single specialty is more positive on this than your multi-specialty.
Our final analysis was to see if there was a relationship between health information technology (Health IT) implementation and SOPS scores and we predicted that offices with greater Health IT implementation would have more positive patient safety culture scores than those with less Health IT implementation.
Again, we performed partial correlations between Health IT implementation and the Medical Office SOPS scores so that we could control for office size. We assessed the degree of Health IT implementation on a scale from 1 (not implemented and no plans to implement in the next 12 months) to 4 (fully implemented).
There were five Health IT tools that we looked at. Electronic appointment scheduling there on the first row, 81 percent of the medical offices were fully implemented on this tool while only 36 percent of the medical offices were fully implemented on electronic ordering of tests, imaging, and procedures. And then going down to the last row, we see 50 percent were fully implemented on electronic medical records.
What we found actually was that there weren't a lot of relationships with these five tools except for implementation of electronic medical records. So this is showing the relationship for electronic medical record implementation, where they were slightly higher on 11 of the 15 measures. Again, we see a low to moderate average correlation of .15, ranging from .10 to .27. Translating that into percent positive, we look at the average SOPS composite score. We see fully implemented offices were at 67 percent positive and not fully implemented were just at 66 percent positive. So, not a huge difference there when we dichotomize those fully implemented and not fully implemented. And we see average rating on quality was not significant and overall rating on patient safety had about a 3 percentage point difference between the fully implemented and not fully implemented.
Again, looking at the strongest relationship of the largest difference, we note patient safety and quality issues, where the fully implemented offices were at 63 percent positive and the not fully implemented were slightly lower, 5 percentage points lower, at 58 percent positive.
That was a lot to go over, so for our conclusions, let me recap the five analyses we discussed. For staff position, we saw that overall, management and physicians had more positive patient safety culture scores than other staff, except on three measures where they were less positive. Smaller medical offices had slightly more positive patient safety culture scores than larger ones. Physician/provider-owned offices had more positive patient safety culture scores than hospital/health system-owned and university/academic offices.
For specialty, we found that single-specialty medical offices were slightly more positive than multi-specialty offices. And Health IT implementation, overall, was not strongly related to patient safety culture scores, although offices with greater EMR [electronic medical record] implementation had slightly higher patient safety culture scores.
If you have any questions, you can write to the two E-mail addresses shown, for the databases on safety culture and the safety culture surveys, and we'd be happy to answer them. Thank you very much. And back to you, Joann.
Joann Sorra: Thanks, Naomi. There's just one more slide before we go to the question-and-answer session, and it's about the upcoming comparative database for the Medical Office Survey. As many hospitals and health systems know, the Hospital Survey has had a comparative database, with an annual report produced since 2007, and we will be establishing a comparative database for this survey as well. The database will serve as a central repository for any medical office or system that has administered the survey and is willing to voluntarily submit its data. This will really be a great resource for comparing results with other medical offices.
Right now, we've presented comparative results on only 470 medical offices, but we expect the database to grow much larger once we receive data from medical offices across the Nation that have administered the survey. Participating medical offices will receive a free Medical Office Survey feedback report that compares their results to the latest benchmarks. An overall comparative report, similar to the hospital report, will be produced and made available on the AHRQ Web site in 2012.
Data submission will be open September 15th through October 15th. I encourage all hospitals, health systems, and medical offices that are interested in administering this survey to do so before September and then submit their data; that way we can have a more robust database and better benchmarks. For more information, the AHRQ Web site now has submission instructions, so you can go to the site to see exactly what you need to do to submit to the database.
Questions and Answers
Joann Sorra: Now we'll begin the question-and-answer session.
For your information, after the webinar, you will be able to access a replay of today's webinar, an audio recording, a written transcript, and the presenters' slides.
I'd like to turn to the audience's questions now. Again, please go ahead and submit them through the online form. Our first question here I'm going to direct to you, John and L.J. Why were you expecting differences between job positions if all job positions are assessing the same safety culture items in the same workplaces at the same time? I think they're curious about why we did this analysis expecting differences in the responses across the different types of staff positions.
John Hickner: I think the best answer is the elephant analogy, which is that people have different views of the elephant and they see things from different perspectives. Certainly, in the offices that I've worked in I can see differences in attitudes and perspectives between the receptionists, the nurses, and the physicians, so I think it's not surprising that we would see this. Ideally, if there is a lot of open discussion in offices about the specific issues that are measured on the safety culture survey, over the course of time people will come to more of a common view. It is a good sign, however, I think, that teamwork was pretty uniformly rated highly in all categories. And I think that's a good start.
Joann Sorra: L.J., you have anything to add?
Lyle J. (L.J.) Fagnan: Just to parallel what John is saying, physician leadership and managers oftentimes have a little rosier picture and are not as connected to what is going on. They're not sitting in the lunchroom hearing what the real issues are. So I think it is some indication of not having your finger on the pulse, and I think those practices that schedule regular times to sit down, tell stories, and reflect on things, where there is a more horizontal relationship, are going to have more positive scores. There are lots of things to think about here, and we hope to look at that in more detail.
Joann Sorra: Thank you. Our next question is again for you, John and L.J., about surveying practices staffed by residents and faculty who see patients on a part-time basis. Would you recommend surveying those residents and faculty?
John Hickner: Yes, I certainly would, even though they may be there a third of the time or half of the time. I think their opinion is important, so I would include them.
Joann Sorra: L.J., how about you?
Lyle J. (L.J.) Fagnan: I would agree with that. We are seeing increasing numbers of those types of practices, and this survey is quite valuable and meaningful for them.
Joann Sorra: Thank you. Our next question is, can you talk a little bit about the staff hours needed to complete the survey process from start to finish, and also about the costs? I don't know if, L.J., you have a sense of this from the 300 medical offices in which you helped administer the survey.
Lyle J. (L.J.) Fagnan: We did this in 11 networks, and there are probably 11 different answers to this. It does take a fair amount of time, and we had support to do it. Our sense is that it takes some outside facilitation to engage the practices, both in administering the survey and being there face to face getting the surveys done. Then it's nice to get the results back, but in order to get practices to reflect on them, if you are working with a set of practices, you actually need to go sit down with them; otherwise the report is going to sit in a stack on their desk and not get looked at. So I think it does take a fair amount of time. It's only going to be of as much value as the effort and time that you put into it, and I do not have any economic data saying how many dollars or how much staff FTE it would take.
Joann Sorra: John, any experiences from you with Cleveland Clinic?
John Hickner: Clearly, it takes effort. Those who complete the survey spend between 10 and 20 minutes on it, so that's their full obligation. The real time commitment falls on the person administering the survey. We found when we did it at Cleveland Clinic that having one person responsible in each individual practice was sufficient to pass out the questionnaires and collect them in a confidential fashion; we used sealed envelopes. Then there are the data entry issues: if you do it with a paper form, somebody has to enter the data into a database. AHRQ provides a terrific database that you can put the results into, and the database has macros that will do the analysis. You also will have the option, I believe, and Joann can answer this, of doing this online, so that is another option. I don't know if that is available yet.
Joann Sorra: The Agency for Healthcare Research and Quality does not support a central Web survey. In the pilot test, one of the systems had very good Internet access in their medical offices and administered the survey themselves through the Web, so we know the Web is a mode that medical offices will increasingly be able to use. What we found in the pilot, though, was that we had a better response rate on paper; even in hospitals, it is harder on the Web to match the response rate you can get on paper. Paper is still the best way to go for a higher response rate.
Next question. Once a survey is administered and results are received, will there be help from AHRQ to address areas of concern? I can probably answer this question. Westat holds the support contract for AHRQ to provide assistance to users of the survey. Right now, there is a medical office resource list on the AHRQ Web site that lists dozens of free online resources addressing the various areas assessed in the survey. So I recommend that you go to the Medical Office Survey page on the AHRQ Web site, look at the medical office resource list, and check out some of the resources that address things like access and information exchange and followup. I would recommend you start there, and then if you have questions, you can send them to the safety culture surveys mailbox and technical assistance line.
The next question. Are there identified best practices that are linked to the survey areas? For example, how to address work pressure and pace. John and L.J., I don't know what your experience has been in terms of once you have the results, what your next steps have been with the medical offices.
John Hickner: Unfortunately, the research is thin in this area. Although some organizations do very well and have developed their own internal best practices, I don't think anybody has published much in the way of best practices. As an example, my research group has been working on the best ways to follow up on test results, which is a real safety issue. Our group has also done a little work on ensuring accurate medication lists, another big safety area. But by and large, I would say that safety research in the office setting, when it comes to implementation and best practices, is still at an early stage, and there is great need for work in that area.
Joann Sorra: L.J., any comments?
Lyle J. (L.J.) Fagnan: I am going to speak as an individual network. We were one of the 11 networks, and we did 36 practices in our network. I think there is an opportunity here. Again, you have to find some support and funding for this, but we have exemplar practices. Looking at work pace and pressure, the average is about 44 percent positive responses, but I had a couple of practices that were at 80 and 90 percent positive. Others were at 22 percent and 15 percent, so it would actually take time to figure out what is going on in those practices. What are the characteristics of the practices that allow them to do really well, or that leave them more challenged in those areas? I think there is plenty of opportunity to look at it, but we are going to have to get down to the microsystem level to understand that.
Joann Sorra: Thank you. Our next question is, if you have an office with fewer than 10 staff, what do you recommend doing with these offices in terms of breaking out the results? Naomi, perhaps you can address this in terms of the rules we use for the comparative database, hospital versus medical office, and the differences in the required Ns.
Naomi Dyer: Sure. For the Medical Office Survey, to get results all you need are 5 respondents, not 10; with the Hospital Survey, it is the rule of 10. For breakouts by staff position, if you are looking at your own results, we would say don't report anything with fewer than 3 respondents, for confidentiality reasons. For the comparative database, as long as you have one respondent, we will include it in the benchmark, since some staff positions may have fewer than three respondents in a given medical office.
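Naomi's reporting rules can be sketched as a small filter over respondent counts. The threshold constants and function name below are illustrative, not from the AHRQ materials; the thresholds themselves are the ones stated above (5 respondents per medical office, 3 per staff-position breakout).

```python
# Minimum respondents stated above: 5 per office, 3 per staff-position breakout.
MIN_OFFICE_N = 5
MIN_BREAKOUT_N = 3

def reportable_breakouts(counts_by_position):
    """Return the staff-position breakouts that can be reported.

    counts_by_position: dict mapping staff position -> respondent count.
    Returns {} when the office as a whole has too few respondents.
    """
    total = sum(counts_by_position.values())
    if total < MIN_OFFICE_N:
        return {}  # too few respondents to report office-level results at all
    # Suppress any position with fewer than 3 respondents, for confidentiality
    return {pos: n for pos, n in counts_by_position.items() if n >= MIN_BREAKOUT_N}
```

For example, an office with 2 physicians, 4 nurses, and 3 administrative staff (9 respondents total) could report the nurse and administrative breakouts but would suppress the physician breakout.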
Joann Sorra: Thank you. Our next question is, do you anticipate that payers or other groups will expect office practices to conduct these surveys as part of recognition, pay for performance, contractual obligations? John or L.J., any thoughts on that?
John Hickner: The answer right now is yes if the practices belong to organizations accredited by the Joint Commission, because the Joint Commission now requires periodic surveys of safety culture. For example, Cleveland Clinic just did its most recent survey this past summer, which included not just the hospital employees but all the offices. For smaller offices, I think no, but I would guess that eventually all accredited practices will need to do some kind of assessment of safety culture on a periodic basis. I don't know if that's once a year, once every 2 years, or what.
Lyle J. (L.J.) Fagnan: Maybe there are other tools out there, but to my knowledge this is the only tool that really measures the experience of care from the people who actually work in the office. We have patient experience of care tools, and we have clinical quality measures, but this one actually has people respond based on their perceptions of what's going on in these various domains. I think it's incredibly valuable. I think it is mostly useful internally, for practices to start to reflect on how they are doing and to enhance their communication structure. I am not sure how much payment is going to be attached to this, but I think it's a tremendous tool for practices to engage each other in talking about these various quality and safety domains.
Joann Sorra: Thank you, L.J., John, and Naomi. Before we close, we ask that you please take a minute to complete an evaluation of today's webinar, which will automatically appear on your screen in just a moment. Your feedback is very important to us. We are also providing a technical assistance E-mail address and toll-free number if you have any further questions or comments on the Medical Office Survey, and we've also provided the link to the Medical Office Survey pages on the AHRQ Web site. A big thank you to our speakers, and I definitely want to thank you all for participating in today's webinar.
