Section 5: Determining Where To Focus Efforts To Improve Patient Experience
To identify opportunities to improve patient experience and determine where to direct your resources, you can start by reviewing your CAHPS survey results in combination with other forms of patient feedback, both quantitative and qualitative. You can then use a variety of qualitative methods to confirm and gather further insights into specific problems, identify possible solutions, and monitor progress. Because some qualitative methods are easier and less expensive to implement than surveys, they can be used more frequently to provide ongoing feedback valuable to clinicians, administrators, and staff.
This section covers four ways to figure out which aspects of patient experience could and should be improved:
- Analyze CAHPS survey results to understand your organization’s performance.
- Analyze other sources of data for related information.
- Evaluate the process of care delivery.
- Gather input from stakeholders.
Once you have identified the aspects of patient experience for which you want to develop improvement activities, you will have to decide where exactly to focus your resources. Considerations include how widespread the problem is, how different your score is from others (i.e., the size of the opportunity to improve), the nature of current improvement activities, and the importance of the issue based on other forms of patient feedback.
5.A. Analyze CAHPS Survey Results To Understand Your Organization's Performance
Once you have results from a CAHPS survey in hand, you can start by seeing where your scores appear low relative to other composite measures in the survey. You can then conduct different kinds of analyses to identify your organization's relative strengths and weaknesses:
- Compare your CAHPS scores to benchmarks.
- Compare your current CAHPS scores to past performance.
- Assess which aspects of performance are most relevant to your members or patients.
Each kind of analysis provides a different perspective on performance. In some cases, you may be able to obtain sufficient information from using just one or two of these methods.
5.A.1. Compare Your CAHPS Scores to Benchmarks
One way to get the information you need to identify specific problem areas, formulate an improvement plan, and select appropriate strategies is to compare your performance to others. To do that, you need to identify benchmarks or comparative data that are appropriate and relevant for your organization. A benchmark could be a regional or national average, the average score for the same type of organization, or a "stretch goal," such as the score achieved by the top performers. Your benchmark choices should be guided by your business strategy and improvement goals.
Major sources of comparative benchmarks include:
- CAHPS Database (for both the Clinician & Group Survey and the Health Plan Survey for Medicaid, CHIP, and Medicare plans)
- National Committee for Quality Assurance's (NCQA) Quality Compass (Health Plan Survey)
- Centers for Medicare & Medicaid Services (Health Plan Survey for Medicare only)
Other sources include:
- Your survey vendor. Many vendors offer access to comparison norms for their clients.
- Community-level data. Depending on the nature of quality measurement activities in your State or region, you may have access to benchmarks specifically for local providers. For example, several multi-stakeholder collaborative organizations gather and report comparative CAHPS results at the clinic site or individual physician level. (Learn about regional health improvement collaboratives.)
When comparing your results to a benchmark, keep in mind that the benchmark provides only a relative comparison. Even though your results may be better than the average score, for example, you may believe there is room for improvement in a particular area in an absolute sense. In fact, there may be some aspects of patient experience measured by the CAHPS survey that even the highest scoring sites could improve on.
There are many ways to analyze your CAHPS results in comparison to benchmarks or other reference points. There is no "right" approach, and the selection of methods for data scoring and presentation will depend on both the benchmarks you choose to use and the level of detail needed by your audience. Following are several examples of different approaches for comparing CAHPS survey results to benchmarks. These examples draw on survey results from the Clinician & Group Survey but apply as well to the Health Plan Survey.
5.A.1.a. Comparing Mean Scores
The simplest place to start is to compare the organization's mean scores for the CG-CAHPS composite and rating measures with the average mean score for comparable entities (e.g., other physician practices, medical groups, or health plans), as illustrated in Figure 5-1. As can be seen in this example, a practice site's mean score for the Provider Communication composite measure (3.64) is significantly higher than the mean for the medical group (3.44), yet its mean score for the Provider Rating (8.21) is significantly lower than the mean for the group (8.74). The site is not significantly different from the group on the other two composites. The horizontal lines for each composite in the "Comparison to the Group Mean" column show the minimum site score and the maximum site score within that group.
For the purposes of comparing composite measures and rating items that have different response categories, Figure 5-2 shows the same data with the mean scores normalized to a 0-100 scale. (Learn about normalizing scores in the box below.)
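The normalization mentioned above can be sketched as a simple linear rescaling. This is an illustrative example, not the official CAHPS scoring specification; the scale endpoints below assume a 1–4 "Never" to "Always" response scale for composites and a 0–10 scale for rating items:

```python
def normalize_score(mean_score: float, scale_min: float, scale_max: float) -> float:
    """Linearly rescale a mean score to a 0-100 range."""
    return (mean_score - scale_min) / (scale_max - scale_min) * 100.0

# Composite measure on a 1-4 response scale (values from Figure 5-1):
print(round(normalize_score(3.64, 1, 4), 1))   # 88.0
# Rating item on a 0-10 scale:
print(round(normalize_score(8.21, 0, 10), 1))  # 82.1
```

Putting both kinds of measures on the same 0–100 scale makes side-by-side bar charts such as Figure 5-2 directly comparable.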
5.A.1.b. Comparing "Top Box" Scores to Benchmarks
Another option is to compare the percent of responses in the best possible category for a survey question or composite measure (i.e., the "top box" score) to one or more benchmarks. The CAHPS Database uses this method in one of the displays included in its online reporting system.
Table 5-1 illustrates a comparison of scores for a sample medical group on the CAHPS Database Submitter's Site for the Access composite measure ("Getting Timely Appointments, Care, and Information") and its individual items in the Clinician & Group Survey 2.0. The medical group scores (in the shaded column) are compared to the overall average of scores in the CAHPS Database and to selected percentile scores. (See the box below for an explanation of percentile scores.)
| Composite/Item | Selected Group/Site | CAHPS DB Overall | 90th Percentile | 75th Percentile | 50th Percentile | 25th Percentile |
|---|---|---|---|---|---|---|
| Getting Timely Appointments, Care, and Information | 58% | 59% | 73% | 66% | 59% | 52% |
| Got appointment for urgent care as soon as needed | 64% | 64% | 81% | 74% | 66% | 58% |
| Got appointment for check-up or routine care as soon as needed | 69% | 68% | 83% | 77% | 71% | 63% |
| Got answer to phone question during regular office hours on same day | 53% | 59% | 78% | 69% | 60% | 52% |
| Got answer to phone question after hours as soon as needed | 63% | 59% | 80% | 68% | 58% | 48% |
| Wait time to be seen within 15 minutes of appointment time | 41% | 43% | 61% | 52% | 43% | 33% |
Source: CAHPS Database Submitter's Site for the CAHPS Clinician & Group Survey 2.0
By comparing your organization’s top box score for a composite measure and its items to the mean top box score (CAHPS DB Overall) and the percentile scores, you can determine where your organization can improve. For example, the sample comparison in Table 5-1 shows that the medical group's scores for the Access composite measure and its items are roughly in line with the mean score, with the exception of the item, "Got answer to phone question during regular office hours on same day." The medical group's top box score of 53% for this question is close to the national 25th percentile score of 52%, suggesting the need to investigate factors that may be influencing this lower score.
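A "top box" score is simply the percent of responses in the best category. The sketch below shows one way to compute it and flag where it falls against benchmark percentiles; the response data and percentile cutoffs are hypothetical (the cutoffs loosely echo the phone-question row of Table 5-1):

```python
from collections import Counter

def top_box_percent(responses, top_box="Always"):
    """Percent of responses in the best possible category (the 'top box')."""
    counts = Counter(responses)
    return 100.0 * counts[top_box] / len(responses)

# Hypothetical responses to "got answer to phone question on same day"
responses = ["Always", "Usually", "Always", "Sometimes", "Never", "Always",
             "Usually", "Always"]
score = top_box_percent(responses)  # 4 of 8 responses -> 50.0

# Hypothetical benchmark percentile cutoffs for the same item
percentiles = {"25th": 52.0, "50th": 59.0, "75th": 66.0, "90th": 73.0}
below = [p for p, cutoff in percentiles.items() if score < cutoff]
print(f"Top box: {score:.0f}%, below the {below[0]} percentile" if below
      else f"Top box: {score:.0f}%")
```

A score below the 25th percentile, as in this made-up example, would be a clear signal to investigate the underlying process.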
One way to identify what is driving a relatively low score for a large organization is to look at the scores for its components. By calculating benchmark scores for a large organization, such as a health plan, health system, or medical group, you can see how entities within the organization compare to each other. For example, if the medical group in the example above submitted data to the CAHPS Database for several practice sites, the group and its practices could see a display of bar charts showing the full distribution of scores for each practice site. As illustrated in Figure 5-3, among the sample medical group's three practice sites, Practice Site A has the lowest top box score for the question related to getting an answer to a phone question during regular office hours on the same day. In addition, the down arrow indicates that the mean score for Practice Site A is below the average for all practice sites included in the CAHPS Database, calculated at the 0.05 significance level. This type of comparison would allow the medical group to pinpoint improvement opportunities at particular practice sites.
For more information on using the CAHPS Database to compare CAHPS results for both health plans and medical groups, explore the CAHPS Database Online Reporting System.
For more information on the pros and cons of different scoring and comparison methods for CG-CAHPS Survey results, read:
- Aggregating and Analyzing CAHPS Clinician & Group Survey Results: A Decision Guide.
- Developing a Public Report for the CAHPS Clinician & Group Survey: A Decision Guide.
5.A.2. Compare Your Current CAHPS Scores to Past Performance
If you have collected CAHPS survey results more than once, another useful way to identify opportunities for improvement is to look at past performance. Comparing your current scores to previous scores can be valuable for:
- Detecting areas where your performance is improving, declining, or holding steady.
- Increasing your confidence that the scores reveal a true picture of performance and are not just a snapshot of performance at a single point in time.
Figures 5-4 and 5-5 present two sample displays to examine CAHPS data over time. In Figure 5-4, bar graphs show trends in "top box" scores from 2010-2014 for the four Health Plan Survey composite measures and two rating items.
Figure 5-5 shows the same data using line charts to plot the trends over time. With the line charts, it was necessary to alter the y-axis so that it starts at 50% and goes to 100%. Because most of the scores clustered within 30 percentage points of each other, this change to the axis makes it easier to see the differences in scores across the measures.
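A line chart like Figure 5-5 can be produced with a few lines of matplotlib. The trend data below are invented for illustration; the key step is restricting the y-axis to 50–100% so closely clustered lines remain distinguishable:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical top-box trends, 2010-2014 (illustrative, not real CAHPS data)
years = [2010, 2011, 2012, 2013, 2014]
scores = {
    "Getting Needed Care": [78, 79, 81, 80, 82],
    "Getting Care Quickly": [71, 72, 70, 73, 74],
    "How Well Doctors Communicate": [88, 89, 90, 90, 91],
}

fig, ax = plt.subplots()
for measure, values in scores.items():
    ax.plot(years, values, marker="o", label=measure)

# Start the y-axis at 50% rather than 0% to spread out the clustered lines
ax.set_ylim(50, 100)
ax.set_ylabel("Top box score (%)")
ax.set_xticks(years)
ax.legend()
fig.savefig("trends.png")
```

The trade-off of a truncated axis is that visual differences are exaggerated, so label the axis clearly when sharing such charts.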
5.A.3. Assess Which Aspects of Patient Experience Are Most Important to Your Members or Patients
Another method you can use to help determine what specific issues to focus on for improvement involves identifying the factors that are most important to members or patients. This analysis of the "importance" of topics in the CAHPS survey—sometimes referred to as a "key driver" analysis—requires an assessment of how strongly a score for a particular question or composite measure is associated with patients' or enrollees' overall rating of their health plan or medical practice. This type of analysis can be conducted with data from multiple groups, sites, or plans.
The statistic commonly used to assess such associations is called a correlation coefficient, which can range from –1.0 to +1.0 (see box below for information about interpreting this statistic). There are several methods for calculating correlations; the method that is recommended for CAHPS scores is the Spearman correlation, but other methods may also be useful.
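The Spearman correlation is simply the Pearson correlation computed on ranks. The from-scratch sketch below, with made-up plan-level data, illustrates the mechanics; in practice a library routine such as `scipy.stats.spearmanr` does the same job:

```python
def rank(values):
    """Assign 1-based ranks; tied values share the average of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical plan-level scores: communication composite vs. doctor rating
comm = [3.2, 3.4, 3.5, 3.6, 3.1, 3.8, 3.3]
rating = [8.0, 8.4, 8.3, 8.9, 7.9, 9.1, 8.2]
print(round(spearman(comm, rating), 2))  # 0.96
```

A coefficient near +1 means plans that score high on the composite also tend to receive high overall ratings; a value near 0 means little association.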
The following examples illustrate the results of a key driver analysis for the Health Plan Survey and the Clinician & Group Survey. These correlations do not necessarily apply to your implementation of a CAHPS survey; it is important to analyze your own data for such correlations because they can be different for each sample.
5.A.3.a. Correlation Coefficients for the CAHPS Health Plan Survey
Table 5-2 below presents Spearman correlations between the Health Plan Survey composite measures and the overall ratings of doctor, care, plan, and specialist. As has been found in previous analyses, the strongest relationship was between the Doctor Communication composite and the Doctor Rating.
| Composite measure | Doctor rating | Care rating | Plan rating | Specialist rating |
|---|---|---|---|---|
| Getting needed care | 0.53 | 0.68 | 0.57 | 0.43 |
| Getting care quickly | 0.48 | 0.61 | 0.48 | 0.31 |
| How well doctors communicate | 0.69 | 0.67 | 0.44 | 0.39 |
Note: All correlations are statistically significant (p < .001). Data for analyses came from 122 health plans that administered the Health Plan Adult Medicaid Survey.
5.A.3.b. Correlation Coefficients for the CAHPS Clinician & Group Survey
Table 5-3 presents Spearman correlations between the composite measures from the Clinician & Group Survey 2.0 with supplemental Patient-Centered Medical Home (PCMH) items and the overall rating of the provider. Consistent with the example of the Health Plan Survey above, the data indicate a very strong association between the Provider Communication composite and the Provider Rating and strong but slightly smaller relationships between Access to Care and Office Staff scores and the Provider Rating. The correlations for the three PCMH supplemental composites are much lower than those for the core composites.
| Composite measure | Provider rating |
|---|---|
| Getting timely appointments, care, and information | 0.61 |
| How well providers communicate with patients | 0.87 |
| Helpful, courteous, and respectful office staff | 0.66 |
| Talking with you about taking care of your own health (PCMH) | 0.38 |
| Attention to your mental or emotional health (PCMH) | 0.17 |
| Talking about medication decisions (PCMH) | 0.52 |
Note: All correlations are statistically significant (p < .01). Data for analyses came from 714 practice sites that administered the Clinician & Group PCMH Survey 2.0.
5.A.3.c. Creating a Priority Matrix
One very useful way to home in on areas for improvement is to plot a "priority matrix" that graphically displays relative performance on the composite measures along with the relative "importance" of each composite measure, as reflected in its association with an overall rating of care.
Using an example based on the CG-CAHPS survey with PCMH supplemental items (shown in Figure 5-6), a priority matrix plots the following two variables:
Relative Performance on the Y-Axis. On the Y-axis, the chart displays where the practice site's scores stand in relation to all other practices included in the survey. That is, scores below the "50" line denote measures for which the practice's performance is below the 50th percentile, and those above the 50 line denote measures for which the practice's performance is above the 50th percentile.
Relative Importance on the X-Axis. On the X-axis, the chart shows the relationship between each survey measure and patients’ overall rating of the provider, as measured by the correlation coefficient discussed above. The further to the right a measure is on the chart, the more strongly it is associated with the provider rating. The vertical line at 0.6 illustrates one way to differentiate higher and lower correlations, as correlations at or above 0.6 signify a strong association.
Combining these two pieces of information into a matrix, as shown in Figure 5-6, can help you identify priority areas for improvement in the practice. For example, measures in the bottom right quadrant reflect those that should probably be the highest priorities for improvement in that they are both important to patients (as revealed by high correlations with patients' rating of the provider) and areas in which the practice performed below the 50th percentile. The other quadrants convey similar information about how the practice performed on each aspect of care and the relative importance of this area to patients. Note that Figure 5-6 is an illustrative example; where you choose to place the lines to form the quadrants should be based on your own goals and priorities.
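A priority matrix of this kind can be plotted with matplotlib. In the sketch below, the correlations loosely echo Table 5-3 but the percentile ranks are invented, and the quadrant lines at the 50th percentile and a 0.6 correlation are placed as in Figure 5-6; your own cutoffs should reflect your goals and priorities:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

# Hypothetical values: (correlation with provider rating, percentile rank)
measures = {
    "Timely appointments": (0.61, 42),
    "Provider communication": (0.87, 65),
    "Office staff": (0.66, 38),
    "Self-care support (PCMH)": (0.38, 55),
    "Mental/emotional health (PCMH)": (0.17, 60),
    "Medication decisions (PCMH)": (0.52, 30),
}

fig, ax = plt.subplots()
for name, (corr, pct) in measures.items():
    ax.scatter(corr, pct)
    ax.annotate(name, (corr, pct), textcoords="offset points", xytext=(5, 3))

# Quadrant lines: 50th percentile performance, 0.6 correlation cutoff
ax.axhline(50, color="gray", linestyle="--")
ax.axvline(0.6, color="gray", linestyle="--")
ax.set_xlabel("Correlation with overall provider rating (importance)")
ax.set_ylabel("Percentile rank (relative performance)")
ax.set_xlim(0, 1)
ax.set_ylim(0, 100)
fig.savefig("priority_matrix.png")

# Highest priority: important (corr >= 0.6) but below the 50th percentile
priority = [n for n, (c, p) in measures.items() if c >= 0.6 and p < 50]
print(priority)
```

In this made-up example, timely appointments and office staff land in the bottom right quadrant and would be the leading candidates for improvement work.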
These kinds of analyses and graphical representations of relationships are not difficult to do, but they do require time and access to analytical support. Many survey vendors are capable of providing these services as part of the CAHPS data collection and reporting process.
Once you have compared your CAHPS scores to your previous scores and/or relevant benchmarks (e.g., national, regional, or other comparison group of interest), you may want to review related information to confirm your findings and identify steps you could take to improve patient experience. Sources of information that could be helpful for this purpose include complaints and compliments, patients' comments, and administrative data.
Health plans and providers typically have access to, or can easily gather, various types of administrative data that can be "mined" to determine which performance issues may be affecting your CAHPS scores. Examples of sources of administrative data include:
- Telephone logs
- Employee work hours
- Visit appointment records
The types of data you choose to use for further analysis will depend on the issues you identified when examining your CAHPS results. For example, if you are interested in improving patients' experiences in getting appointments when needed, you could:
- Examine visit appointment records to assess missed appointments.
- Analyze telephone logs to assess how many dropped calls or failed appointment queries occurred.
- Analyze visit appointment records to determine the amount of time between scheduling an appointment and the actual appointment date.
- Search your complaint records and tabulate the number of complaints received about appointment problems.
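For instance, two of the checks above, appointment lead time and missed appointments, can be computed directly from scheduling records. The record layout and values below are hypothetical:

```python
from datetime import date

# Hypothetical appointment records: (date scheduled, appointment date, kept?)
records = [
    (date(2015, 3, 2), date(2015, 3, 20), True),
    (date(2015, 3, 5), date(2015, 4, 1), False),   # missed appointment
    (date(2015, 3, 9), date(2015, 3, 12), True),
    (date(2015, 3, 10), date(2015, 4, 14), False),  # missed appointment
]

# Average days between scheduling and the actual appointment date
lead_times = [(appt - booked).days for booked, appt, _ in records]
avg_lead = sum(lead_times) / len(lead_times)

# Missed-appointment (no-show) rate
no_show_rate = 100.0 * sum(1 for _, _, kept in records if not kept) / len(records)

print(f"Average lead time: {avg_lead:.1f} days")  # 20.8 days
print(f"No-show rate: {no_show_rate:.0f}%")       # 50%
```

A long average lead time alongside a low CAHPS Access score would corroborate that patients genuinely struggle to get timely appointments, rather than the survey result being noise.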
Page originally created November 2015