Section 1. Methods

Users of Public Reports of Hospital Quality: Who, What, Why, and How?

Population and Setting

Report sponsors participating in this project all have online public reports of hospital quality and are Chartered Value Exchanges (CVEs)—multistakeholder collaboratives with a mission of quality improvement and transparency—or CVE affiliates. Web site sponsors were invited to participate via multiple communication channels, and the Web sites of all interested sponsors were included in the project. The group of participating Web sites represents all major regions of the country. Data collection occurred over a 3-month period between February and May 2011.

Three sources of information were used in developing this report: Web analytics, survey responses, and expert review.


Web Analytics

We gathered Web metrics from each Web site using Google Analytics, a free and commonly used Web analytics service. Participating Web sites inserted the tracking code into their pages and then excluded traffic from computers internal to their organizations.

The total number of unique visitors across the participating Web sites was 87,249, and the number for individual Web sites varied considerably, from 41 to 52,247. Since some of this variation reflects differences in the size of the sites' geographic areas and populations served, Table 1 presents the number of visitors per 100,000 Internet-using households. This population-adjusted figure still varied considerably across the participating sites.


Table 1. Unique visits to each Web site per 100,000 households with Internet access in the site's geographic area

Web Site Number                                                 1    2    3    4    5    6    7    8    9   10   11   12   13   14   15   16
Unique visitors per 100,000 households with Internet access   1.1  9.7   30   43   47   60   61   71   95  111  136  141  142  377  501  507

Note: The number of Internet-using households in each geographic area was obtained from the Centers for Disease Control and Prevention.
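
To make the adjustment concrete, the following is a minimal Python sketch of the population adjustment described above; the visitor count and household figure are hypothetical placeholders, not values from the report.

    def visitors_per_100k(unique_visitors, internet_households):
        """Unique visitors per 100,000 households with Internet access."""
        return unique_visitors / internet_households * 100_000

    # Hypothetical example: a site with 2,600 unique visitors serving an area
    # with 1.85 million Internet-using households.
    print(round(visitors_per_100k(2_600, 1_850_000), 1))  # -> 140.5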

We analyzed sources of traffic in three broad categories: traffic from search engines (e.g., Google, Yahoo, Bing); referrals from another Web site (e.g., a link in an online newspaper article); and direct entry, that is, typing the Web site URL into the browser address bar or clicking a direct link in an e-mail or word-processing document.

To understand the search terms being used by visitors arriving by search, we analyzed the keywords used to find the site and the frequency of use for any given search term. We first reviewed a list of the 50 most commonly used search terms for visitors to each site (800 search terms in total across the 16 sites) to identify categories of searches that occurred frequently. The categories identified were searches for:

  • The Web site's name.
  • A CVE affiliate's Web site name.
  • A hospital name.
  • A general search for quality information about hospitals.
  • Other searches.

For each Web site, we then analyzed the top 50 search terms for that site and sorted them into these categories.
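
As an illustration of this sorting step, the following Python sketch assigns search terms to the categories above using simple keyword rules. The site, affiliate, and hospital names are hypothetical, and the actual coding was done by the investigators rather than by rule.

    from collections import Counter

    SITE_NAME = "examplehealthreport"                        # hypothetical Web site name
    AFFILIATE_NAMES = {"examplecve"}                         # hypothetical CVE affiliate names
    HOSPITAL_NAMES = {"st mary hospital", "county general"}  # hypothetical hospital names

    def categorize(term):
        """Assign a search term to one of the five categories identified above."""
        t = term.lower()
        compact = t.replace(" ", "")
        if SITE_NAME in compact:
            return "Web site's name"
        if any(name in compact for name in AFFILIATE_NAMES):
            return "CVE affiliate's Web site name"
        if any(name in t for name in HOSPITAL_NAMES):
            return "Hospital name"
        if "hospital" in t and any(w in t for w in ("quality", "ratings", "compare")):
            return "General hospital quality search"
        return "Other"

    # Tally categories for a site's top search terms (the terms are made up).
    top_terms = ["examplehealthreport", "st mary hospital", "compare hospital quality"]
    print(Counter(categorize(term) for term in top_terms))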

The analytics also report on visits to site content, including the frequency with which each page on the Web site is viewed. To identify the most popular pages, we downloaded information about the top 25 pages viewed for each Web site. However, many Web sites were not structured so that views of the hospital quality pages could be distinguished from views of the home page or other more general pages. As a result, we could not assess the popularity of the hospital quality pages relative to other pages or determine which of the hospital quality pages were viewed most.

We used bounce rate and the absolute number of bounced visitors to assess visitor engagement with the Web sites. Bounce rate is defined as the number of visitors who viewed only one page of the Web site before leaving, divided by the total number of visitors. A lower bounce rate is considered a sign of higher visitor engagement. The analytics data also included two other engagement metrics: overall time spent on-site and average number of page views.
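
A worked example of the bounce rate definition, with made-up numbers:

    single_page_visits = 420   # visitors who viewed only one page (hypothetical)
    total_visitors = 1_200     # all visitors in the period (hypothetical)

    bounce_rate = single_page_visits / total_visitors
    print(f"Bounce rate: {bounce_rate:.1%}")  # -> Bounce rate: 35.0%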

We present only bounce rate information because the other two metrics are potentially misleading in comparisons across the different types of Web sites in the group. Some participating Web sites report hospital information exclusively, while others report both hospital and outpatient information. As a result, the combined reports may show longer times on-site and higher page views than the hospital-only reports simply because they present more content, not because visitors are more engaged with the site.


Survey

Survey Development

The primary aim of the survey was to provide information on report visitors' use and perceptions of the public reporting Web sites. For those who agreed to take the survey, an initial survey question determined respondent type: patient, friend or family member, health care professional, employer, insurer, member of the media, researcher, patient advocate, foundation staff, lawyer, or government staff/elected official. Next, the survey branched to questions and answer options that were specific to the type of survey respondent.
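
The branching can be pictured as a mapping from respondent type to question set, as in this Python sketch; the question identifiers are our own shorthand, not the survey's actual item names, and only two branches are spelled out.

    BRANCHES = {
        "patient": ["experience", "purpose", "topics", "use", "suggestions",
                    "demographics"],
        "health care professional": ["experience", "purpose", "topics",
                                     "suggestions"],
        # ... the other nine respondent types would map to their own question sets
    }

    def questions_for(respondent_type):
        """Return the question set shown to a given type of respondent."""
        return BRANCHES.get(respondent_type, ["experience"])  # fallback for brevity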

The survey covered the following topics:

  • Overall experience on the Web site and usability of the site.
  • Purpose of the respondent's visit.
  • Topics or types of information of interest to the respondent.
  • Use of the information to choose a health care professional or change health care professionals.
  • Suggestions for improving the report.
  • Demographics.

The survey development team was: Naomi S. Bardach and R. Adams Dudley from the University of California, San Francisco; Judith Hibbard from the University of Oregon; and Peggy McNamara and Jan De La Mare from the Agency for Healthcare Research and Quality. After assembling and analyzing a sample of existing online surveys from participating public reporting Web sites, the survey team drafted the survey and vetted it with the participating report sponsors. Subsequently, a series of cognitive interviews was conducted with 11 potential respondents, including consumers, providers, an employer, and an insurer, to improve the interpretability of the survey questions and response options. Go to AHRQ's Hospital-Public Report (H-PR) Surveys.

Appendix B presents the questions asked of the patient, friend or family member, and health care professional respondents, with statistics about responses aggregated across all participating Web sites.

Survey Implementation

The invitation to take the survey popped up when site visitors arrived on pages of the public report. The invitation interrupted the visitor's Web site experience and usually occurred before the visitor had seen any of the Web site, but the survey itself did not appear until after the visitor had concluded the visit and interaction with the Web site.

Report sponsors chose where the survey invitation popped up: some chose to have it open on the first page of the Web site where a visitor arrived, some chose to have it open only on the home page and on pages with hospital quality data, and others chose to have it open only on pages with hospital quality data.

The survey took 2-4 minutes to complete during pilot testing, depending on the respondent type. There were more questions for patient and friend or family member respondents (throughout the report identified as "consumers").

All survey respondents were asked about overall experience and how they rated the site in terms of usability. To decrease the burden on consumer respondents, each consumer was asked about only three of the five topics listed below, with the topics selected randomly for each consumer (see the sketch after this list). This led to a smaller number of respondents (approximately three-fifths of all consumers surveyed) for each of the following topics:

  • Purpose of their visit.
  • Topics of interest.
  • Plans for using the information.
  • Suggestions for improving the report.
  • Demographics.
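
A minimal Python sketch of this random assignment, using the five topic labels above:

    import random

    TOPICS = [
        "Purpose of their visit",
        "Topics of interest",
        "Plans for using the information",
        "Suggestions for improving the report",
        "Demographics",
    ]

    def assign_topics(rng):
        """Pick three of the five topic modules at random for one consumer."""
        return rng.sample(TOPICS, k=3)  # each topic reaches ~3/5 of consumers

    print(assign_topics(random.Random(0)))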

Interpretation of Survey Findings. Because survey participation was voluntary, the information from the surveys is not necessarily representative of all visitors to the site. For instance, though we have information about the proportion of consumer and health care professional respondents for each Web site, we cannot determine whether these proportions are the same among nonrespondents. It may be, for example, that physicians in general are less likely to respond to surveys, so the proportion of physician respondents to the survey may differ from the proportion of physicians visiting the site. However, in this report, we assume that the tendencies of certain populations to respond to surveys are similar across Web sites, so comparisons among Web sites on metrics such as proportions of consumer and health care professional respondents can tell a meaningful story.

For many of the questions, one of the answer options was "Other," with a write-in text box. We analyzed these answers qualitatively; any write-in answer that fit one of the preset categories was placed in that category, and additional answer categories were developed for themes that arose frequently.

Throughout the results, we combine the data for the patients and friends and family members in a single group labeled "consumers." We focus on the consumer and health care professional responses in this report since the number of respondents in the other categories was limited, and because consumers and health care professionals are, generally speaking, the major target audiences for report sponsors. We report on the aggregate analysis results as well as patterns among the individual Web sites. The consumer and health care professional perspectives were analyzed at the individual Web site level only for Web sites with at least 20 consumer respondents (n=5 sites, for consumer questions) and at least 15 health care professional respondents (n=5 sites, for health care professional questions).

Survey Responses and Response Rates. The total number of respondents to the survey for all Web sites was 1,034. The number of respondents and response rates varied considerably across the participating sites.


Table 2. Absolute number of survey respondents and response rate among visitors who viewed more than one page on the Web site

Web Site Number                   1    2    3    4    5   14    6    9    8    7   10   13   16   12   11   15
Number of survey respondents      2    4    5   12   26   26   27   27   28   29   60  125  133  143  170  221
Response rate (%)*              4.5   16  1.2  4.8   30  0.7  3.4  3.1   11  4.5  3.7   12  4.6   11   10  1.1

* Response rate defined as number of surveys/number of visitors to the Web site that viewed more than one page.
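
A worked example of this definition, with made-up numbers:

    surveys_completed = 26
    multi_page_visitors = 87   # visitors who viewed more than one page (hypothetical)

    response_rate = surveys_completed / multi_page_visitors * 100
    print(f"Response rate: {response_rate:.1f}%")  # -> Response rate: 29.9%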


Expert Review of Web Sites

Sponsors of public reporting Web sites make two basic decisions that determine how users experience their data. First, they determine the path or paths available (the "clicks" that must be made) to navigate to performance information. Second, they choose how to display the information once a visitor arrives at a page reporting quality data. There can be tremendous variation in the decisions Web site sponsors make, and that variation may drive how easy it is to use the Web sites to evaluate hospitals and identify the best performers.

To better understand the differences among participating Web sites, the investigators did an in-depth review of each Web site, assessing two groups of characteristics: straightforward characteristics, such as whether visitors have to scroll down to get to quality information, and characteristics that were subject to differences in judgment, such as whether the visual display of performance metrics was inherently meaningful. The choice of characteristics to evaluate was based on the available literature and investigator experience with public reporting.

The characteristics evaluated were defined as follows:

  • Can one select the hospitals for performance display? (yes/no)
  • Can one compare hospitals on one page? (yes/no)
  • Can one compare hospital performance to a benchmark? (yes/no)
  • Can one sort performance results by different criteria (e.g., sorting by hospital name and also sorting by performance on a specific metric such as C-section rate)? (yes/no)
  • Are the metrics shown visually? (yes/no)
  • Is the visual display inherently meaningful? (1=inherently meaningful/very easy to understand; 5=cannot be understood without a legend)
  • Is a composite measure used on the top page of quality information? (yes/no)
  • Are the performance metrics displayed using interpretive labels (e.g., "better," "average," "worse")? (yes/no)
  • Does the Web site use a framework to convey the elements of quality? (investigators defined a framework as a conceptual grouping of measures in a way that helps visitors understand what quality is, identifying the key elements of quality and then using those elements as headings in the display; for example, "patient safety," "effective care," "patient experience") (yes/no)
  • Overall rating of the site's hospital quality information for evaluability (how easily and quickly one can see better and worse options). (1=very easy to evaluate; 5=very hard to evaluate)

For all 1-5 scale questions, two investigators rated each Web site separately and then reconciled any discrepant answers to arrive at a final score.
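
As an illustration, the review rubric could be recorded in a structure like the following Python sketch; the field names are our own shorthand for the characteristics listed above, not the study's actual codebook.

    from dataclasses import dataclass

    @dataclass
    class SiteReview:
        select_hospitals: bool        # can one select hospitals for display?
        compare_on_one_page: bool     # can one compare hospitals on one page?
        benchmark: bool               # comparison to a benchmark available?
        sortable: bool                # results sortable by different criteria?
        visual_metrics: bool          # are the metrics shown visually?
        visual_meaningfulness: int    # 1 = inherently meaningful ... 5 = needs a legend
        composite_on_top_page: bool   # composite measure on top page?
        interpretive_labels: bool     # "better"/"average"/"worse" labels used?
        quality_framework: bool       # conceptual framework for elements of quality?
        evaluability: int             # 1 = very easy to evaluate ... 5 = very hard

    def reconcile(rating_a, rating_b):
        """Two investigators rated each site; discrepant 1-5 ratings were
        reconciled by discussion. This placeholder simply flags disagreement."""
        if rating_a != rating_b:
            raise ValueError("Ratings disagree; reconcile by discussion.")
        return rating_a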
