Evaluating a CAHPS Report
An evaluation of your CAHPS report after it has been released or distributed can help sponsors and their vendors assess how effectively they achieved their goals. This step is related to, but separate from, the testing that the CAHPS team encourages survey users to conduct during the report development stage to ensure that their choices regarding content, graphic displays, dissemination strategies, and so on are appropriate for their audiences and environments.
Through the evaluation process, sponsors can explore issues such as:
- Whether and to what extent they are reaching the targeted audience, and
- Whether and how the information is actually being used.
This feedback can help sponsors determine what could be done to help their audiences better understand and use the information next time.
Evaluations are likely to be most informative if they are planned in advance and supported with an adequate budget. However, conducting an evaluation is often considered unnecessary or a luxury, so it is left out of the report planning and budgeting process. In reality, the lessons learned from an evaluation can be critical to designing a meaningful report and may save time and money in the long term.
To learn more, go to Evaluating Results on AHRQ’s TalkingQuality Web site.
Caveat About Unsolicited Feedback
It is important to realize that unsolicited feedback you happen to receive is not an evaluation, nor is the more casual feedback you might solicit from convenient sources. While this type of feedback can be informative, it will not necessarily tell you everything you need to know. Consumers who find a report confusing or uninteresting will probably just stop reading it rather than take the time to submit a thoughtful critique of what confused them or failed to engage them. Moreover, unsolicited feedback may come from people who are not typical readers of the report, so their opinions may not represent those of your average report user.
How to Evaluate
Several methods are available for evaluating quality reports, some more affordable than others. Options include focus groups, one-on-one interviews, and online surveys. The most appropriate method will depend on what you are hoping to learn from the evaluation (e.g., whether users interpreted the information appropriately, whether they were able to find information on a Web site, who is using the information).
Focus Groups and Interviews
These two methods allow you to get firsthand, detailed feedback from a sample of your intended audience. A disadvantage of these methods is that the resources required to identify a sample, recruit participants, and conduct the interviews can be prohibitive. That said, even a single focus group can provide valuable information.
This kind of qualitative testing is typically done by an outside consultant or vendor. To manage the costs, explore the skill sets of your internal staff or discuss data collection possibilities with your CAHPS vendor; it may be less expensive and less labor-intensive than you think to collect useful data directly from your intended audience.
Another possibility is to design an online survey that “pops up” when a user is viewing a Web-based report card. Such surveys have the advantage of being low-cost (relative to focus groups and in-person interviews) and collecting information at the point when the user is most engaged with the report. A key drawback of this method is that data can be collected only from those viewing the report online.
Read more about using online surveys to evaluate quality reports at http://archive.ahrq.gov/professionals/quality-patient-safety/quality-resources/value/publicsurveys/.
If an Evaluation Isn’t Feasible
If your resources are limited, a smaller-scale evaluation can still be done and will still contribute to the project. If advance planning and a dedicated budget are not possible, report card designers should at least document the lessons they learned in the process of designing and disseminating their report cards (e.g., by keeping simple notes). This should be done during the reporting cycle or very soon after it is complete so that key lessons are not lost and can be drawn upon in the next reporting cycle. The long-term payback of even very simple notes on what was learned in the reporting process can be significant.
Presentations and Papers Based on Evaluations
A sample of presentations and published papers based on evaluations includes the following:
- Carman KL, Farley Short P, Farley DO, Schnaier JA, Elliott DB, Gallagher P. Epilogue: Early lessons from CAHPS demonstrations and evaluations. Consumer Assessment of Health Plans Study. Med Care. 1999 Mar;37(3 Suppl):MS97-105.
- Carman KL, Hibbard JH. Report testing and evaluation: Producing reports that people understand and use. 2004, Ninth National CAHPS User Group Meeting, Baltimore, MD.
- Damiano PC, Willard JC, Tyler MC, Momany ET, Hays RD, Kanouse DE, Farley DO. CAHPS in practice: the Iowa demonstration. J Ambul Care Manage. 2002 Apr;25(2):32-42.
- Elliott MN, Farley D, Hambarsoomians K, Hays RD. Do Medicaid and commercial CAHPS scores correlate within plans?: a New Jersey case study. Med Care. 2005 Oct;43(10):1027-33.
- Hibbard JH, Berkman N, McCormack LA, Jael E. The impact of a CAHPS report on employee knowledge, beliefs, and decisions. Med Care Res Rev. 2002 Mar;59(1):104-16.
- Kanouse DE, Spranca M, Vaiana M. Reporting about health care quality: a guide to the galaxy. Health Promot Pract. 2004 Jul;5(3):222-31.
- Farley DO, Elliott MN, Short PF, Damiano P, Kanouse DE, Hays RD. Effect of CAHPS performance information on health plan choices by Iowa Medicaid beneficiaries. Med Care Res Rev. 2002 Sep;59(3):319-36.
Page originally created October 2012