
Evaluation of AHRQ's Pharmaceutical Outcomes Portfolio

Chapter 2. Methods

The evaluation methods and data sources included social network analysis (SNA), site visits and telephone calls to CERTs and individual grantees, discussions with six stakeholder groups, document review, case studies, and an appreciative inquiry (AI) exercise. Each of these sources is described below. The diverse data sources, methods, and stakeholder discussions enabled us to compare and verify independent sources of data ("triangulate"), strengthening the validity of the evaluation. The site visits and telephone interviews informed most of the evaluation objectives, while techniques such as SNA and AI were targeted to only one or two objectives. Exhibit 1 summarizes the relationship of each of these components to the overall evaluation.

2.1. Social Network Analysis

2.1.1. Purpose and Objectives

The organizational rationale of the CERTs program's "center mechanism" is to spread best practices within the framework of the centers' partnerships, to coordinate resources (e.g., education, databases, administration), and to encourage inter- or cross-disciplinary work, with the goal of improving the understanding and use of pharmacological therapies. SNA labels such networks "ego" networks because they focus on understanding each ego (i.e., each individual CERT). We used SNA6 to understand the relationships between organizations ("nodes") through visual representations of the linkages between them (e.g., contacts and collaborations) and through quantitative network measures. Characterizing and mapping the structure of the overall network, and the position of individual organizations or entities within it, can illuminate the CERTs' collaborative processes. Our approach to social network analysis included characterizing the relationships within and between the individual CERTs, the Coordinating Center, the Steering Committee, and other partnering organizations (i.e., government, non-profit, and for-profit entities).

We defined the CERTs network as the Coordinating Center, the CERTs Steering Committee, the individual CERTs (Alabama, Arizona, Duke, Harvard, Penn, UNC, and Vanderbilt) and other CERTs stakeholders including government agencies and partnering organizations. We sampled each of the seven individual CERTs, the Coordinating Center, and the members of the Steering Committee. We used the UCINET 6 software package to draw sociograms (network graphs) and to calculate quantitative network measurements.

Based on discussions with the client, the analysis focused primarily on the CERTs network as a whole and secondarily on the networks of each CERT and the CERTs Coordinating Center. The analysis of networks was guided by the following questions:

  • What does the CERTs network look like?
  • What is the shape of each individual CERT's network, and how does that relate to its research focus?
  • Who are the key entities within each CERT's individual network?
  • How interdependent or independent are the different actors in the CERTs network?
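The evaluation computed its network measures with UCINET 6. As an illustrative sketch only, the same kinds of measures (network density, degree centrality for identifying key entities) can be computed with the open-source networkx library; the nodes and ties below are hypothetical stand-ins, not the evaluation's actual data:

```python
import networkx as nx

# Hypothetical collaboration ties among CERTs network entities
# (illustrative only; not the evaluation's actual database).
edges = [
    ("Coordinating Center", "Duke"),
    ("Coordinating Center", "Penn"),
    ("Coordinating Center", "UNC"),
    ("Duke", "Penn"),
    ("Steering Committee", "Coordinating Center"),
]

G = nx.Graph(edges)

# Network-level measure: density = observed ties / possible ties.
density = nx.density(G)

# Node-level measure: degree centrality, one way to spot key entities.
centrality = nx.degree_centrality(G)
most_central = max(centrality, key=centrality.get)

print(f"Density: {density:.2f}")
print(f"Most central node: {most_central}")
```

In this toy graph of 5 nodes and 5 ties, the density is 0.50 and the Coordinating Center is the most central node, mirroring its coordinating role in the program's design.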


2.2. Site Visits and Discussions

We conducted discussions and site visits with six Portfolio stakeholder groups: CERTs investigators, Portfolio grantees, an AHRQ representative, Steering Committee members, CERTs partners, and policymakers. Forty-eight individuals associated with the Portfolio were interviewed. Exhibit 2 shows the distribution of stakeholders by type.

Exhibit 2: Distribution of Stakeholder Discussions

Stakeholder Group     Respondents
CERTs or CC                    38
Portfolio Grantees              4
AHRQ                            1
Steering Committee              1
Partners                        1
Policymakers                    3
Total                          48

Steering Committee members interviewed were the chair, two outside policymakers, and the principal investigators (PIs) of the CERTs research centers and the CC. We conducted site visits at four of the CERTs research centers (Duke, HMO, Penn, and UNC) and at the CERTs Coordinating Center. The site visits included in-person semi-structured discussions with the PIs, CERTs investigators, and staff at each center. We conducted telephone discussions with the PIs and other investigators at the three remaining centers (Arizona, UAB, and Vanderbilt).

Exhibit 3 shows the distribution of respondents across the CERTs.

Exhibit 3: Respondents by CERT

CERT           #       %
Arizona        4   10.5%
Duke           7   18.4%
HMO            4   10.5%
Penn           5   13.2%
UNC            8   21.1%
UAB            4   10.5%
Vanderbilt     2    5.3%
CC             4   10.5%
Total         38  100.0%

In addition to the 48 individuals who were interviewed, we sought several external respondents with no affiliation to the Portfolio to briefly discuss their familiarity with the CERTs program and AHRQ's pharmaceutical work. Of those respondents, one was from a university-affiliated medical school pharmacy program and was familiar enough with the program to answer questions; another was from a federal agency and reported some knowledge of the program, although that exposure had ended a year and a half earlier.

Purposive sampling was used to select respondents who were knowledgeable, involved participants in the work of their organizations. Different purposive sampling strategies and criteria were used for each of the six stakeholder groups. For all stakeholder groups except external respondents, we compiled a list of potential respondents in each group (e.g., CERTs investigators, Steering Committee members, Portfolio grantees) from administrative documents and public data sources (e.g., the CERTs Web site). AHRQ's representative was selected based on degree of involvement with the CERTs program. For the CERTs discussions, PIs were always selected; other investigators were selected based on their perceived involvement with their CERT (judged from Web sites and project and publication databases), supplemented by CERT staff recommendations when needed. Participation was also affected by the availability of individuals on the day(s) of the site visit. We selected the Steering Committee chair and other stakeholders (except CERT PIs) based on their organizational affiliations, to obtain representation from key stakeholders and potential end users of CERTs education and research initiatives.

Individual Portfolio grantees were selected based on the attributes of their grant research. The Portfolio included 22 grants. We excluded the 8 CERTs applications; 2 grants awarded to CERTs investigators who had already been selected; and, because the evaluation period was 2002-2005, 4 grants not completed at the time of the evaluation. One grant was initiated and completed within the evaluation period and was selected; the remaining grants' time periods overlapped either end of the evaluation period but were completed by the time of the interviews (Fall 2006). One investigator held 2 grants and was selected. The remaining respondents were selected based on whether their research focused on a topic relevant to the PART goals. Using this approach, we selected 4 grantees representing 5 Portfolio grants.

We identified Partners based on their connection to a CERT project that was selected as a case study. Policymakers were selected based on the organization they represented and its relevance as an end user of the CERTs research. Two policy makers were also members of the CERT Steering Committee. Outside respondents not affiliated with the CERTs program were selected from a list of referrals from evaluation team member contacts.


2.3. Document Review

We collected and used relevant, available program and supporting documents: investigator annual progress reports to AHRQ, other administrative program documents and databases, and relevant documents external to the program. One important use of the document review was to provide background information on the research to facilitate the discussion process.


2.4. Impact Case Studies

2.4.1. Purpose and Objectives

We developed case studies to assess the impact of several key CERTs projects. A primary objective of the case studies was to assess the impact of Portfolio research on state and federal health care policy making by identifying and describing where Portfolio research findings had a substantive impact on policy. The evaluation study questions addressed at least in part by the impact case studies were:

  1. What have been the program impacts?
  2. Have outputs/outcomes had impacts on clinical practice, policies?
  3. Do program outcomes/impacts reflect program goals and AHRQ/DHHS priorities?

A secondary objective of the case studies was to identify, if possible, the mechanisms that led CERTs' projects to have the impact that they did. Four impact case studies were chosen using criteria described below.

2.4.2. Case Study Selection

The goal of the case study selection process was to identify a subset of the most potentially relevant case studies from a list of 296 CERTs projects. From this subset, a purposive sample of four case studies was selected based on input from the Abt research team, Dr. Sheila Weiss (Abt's consulting pharmacoepidemiologist), and AHRQ. The first phase of case study selection applied inclusion and exclusion criteria based on the proposed evaluation plan and the timeframe of the evaluation. The second phase characterized and coded the projects that met those criteria on relevant attributes. The third phase was selecting, with AHRQ, the final four projects for development into case studies. The data sources used for case study selection and nomination were:

  • CERTs project database from Coordinating Center.
  • CERTs publications and presentations.
  • Review of other CERTs documents (e.g. progress reports, strategic plans).
  • Data obtained from discussions with CERTs investigators.

Case studies were selected from the CERTs project database maintained by the Coordinating Center and received in January 2006. The database included 296 projects, to which the inclusion criteria were applied: the project had to be a "core" CERTs project (i.e., funded at least in part by an AHRQ CERTs grant, or supported at least in part by the administrative core funded by an AHRQ CERTs grant)7 and marked completed8 in the Coordinating Center database as of January 2006. From the original list, 127 projects qualified. We then applied the following exclusion criteria:

  • The project had associated publications published outside the period 2002-2005.9
  • The project was completed10 outside the evaluation period 2002 to 2005.
  • The project's dates were unclear in the database, but the project was mentioned in the 2001-2002 Annual Report.
  • The project was a feasibility study, workshop, think tank, or involved committee participation.
  • There was no associated publication, and the project was not identified as having an impact by colleagues within their own or other CERTs.
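As a hedged sketch of this screening logic, the inclusion and exclusion criteria above can be expressed as a simple filter over project records. The field names here are hypothetical stand-ins for the Coordinating Center database, which we do not have:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Project:
    # Hypothetical fields approximating the Coordinating Center database.
    core: bool                                  # "core" CERTs project
    completed_year: Optional[int]               # None if unclear in the database
    publication_years: List[int] = field(default_factory=list)
    is_feasibility_or_workshop: bool = False    # feasibility study, workshop, etc.
    peer_identified_impact: bool = False        # impact noted by CERTs colleagues

def qualifies(p: Project, start: int = 2002, end: int = 2005) -> bool:
    """Apply the inclusion/exclusion screen described in the text."""
    if not p.core:
        return False                            # inclusion: must be a core project
    if p.completed_year is None or not (start <= p.completed_year <= end):
        return False                            # excluded: unclear date or outside 2002-2005
    if any(y < start or y > end for y in p.publication_years):
        return False                            # excluded: publications outside the period
    if p.is_feasibility_or_workshop:
        return False                            # excluded: not a research project
    if not p.publication_years and not p.peer_identified_impact:
        return False                            # excluded: no publication and no noted impact
    return True
```

Each exclusion rule removes a project independently, so the order of the checks does not affect which of the 127 qualifying projects survive to the coding phase.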

The 68 projects remaining after the application of these criteria were coded and classified as follows:11

  • CERT(s) and CERTs investigator(s) involved.
  • Output types (e.g. research publication, curriculum, guideline).
  • Level of Impact (Tunis and Stryer classification scheme).12
  • Highest Location of Impact (national > regional > local).
  • PART Goals addressed.
  • AHRQ Pharmaceutical Outcomes Portfolio goals addressed.
  • CERT program aims addressed.
  • Stakeholder groups impacted (e.g. professional society, government agency).
  • Acknowledgement and description of the project's impact from the CERTs investigators.
  • Additional characteristics of the project that support its impact or can further guide selection and nomination of case studies.

Sarah Shoemaker (pharmacist and researcher) and Sheila Weiss (consultant pharmacoepidemiologist and researcher) reviewed these projects using the following criteria:

  • The project meets more than one AHRQ Portfolio goal.
  • The project meets at least one of the CERTs aims.
  • The project is valuable research that changed policy or practice (a Level 2 or 3 impact13).
  • The impact of the project is already known (e.g. change in guidelines, policies).

The seven nominated case studies were provided to AHRQ along with their characteristics (described above). We targeted a subset of four of the seven, representing diversity across:

  • CERTs involved.
  • Output types (e.g., curricula, reports/publications, tools).
  • Perceived impact (i.e. level of impact).
  • Location of impact (e.g. national, state).
  • Publicized and unpublicized impacts.
  • CERTs, Outcomes Portfolio, AHRQ, and PART goals addressed.

The following four cases were selected to provide examples of CERTs research findings and their impact, and to identify potential mechanisms of impact: the FDA Black Box warning work by Dr. Wagner at the HMO CERT; the QT prolongation study by Nancy Allen-LaPointe at Duke; the Tensions in Antibiotic Prescribing work by Dr. Metlay at Penn; and the Rickets, Vitamin D, and AAP Guidelines work of Drs. Davenport and Calikoglu at UNC.


2.5. Appreciative Inquiry

2.5.1. Purpose

The purpose of the overall evaluation was to analyze the impact of AHRQ's Pharmaceutical Outcomes Portfolio and to determine whether the program has been moving toward its goals. While traditional evaluation techniques can identify problems and, when appropriately designed, areas of strength, Appreciative Inquiry, the technique used for this portion of the evaluation, is designed specifically to identify those aspects of the program's foundation that hold promise for the future. In addition, this technique can help encourage favorable organizational change among Portfolio stakeholders.

Appreciative Inquiry is a qualitative research technique centered on the belief that organizations have an infinite capacity to learn, innovate, and create, and that they are much more likely to change in a positive and meaningful way if they explore what is "right" within the organization rather than what is "wrong." AI encourages organizations to focus on possibilities rather than problems. It focuses on what is best in organizations14 and has been used in health care.15 We used AI to support the evaluation of AHRQ's Pharmaceutical Outcomes Portfolio while encouraging positive organizational change, by providing a structured format for AHRQ and CERTs respondents to articulate and build upon their personal, professional, and organizational strengths. This methodology was designed to encourage creative thinking that would lead to ideas, solutions, and ultimately a plan to further strengthen the Portfolio. The AI "workshop" was designed to answer the following research questions: (1) What do various stakeholders view as the most successful processes and outcomes of the CERTs? and (2) How can this information be used to maximize, leverage, or build upon success in the future?


Current as of December 2007
Internet Citation: Chapter 2: Evaluation of AHRQ's Pharmaceutical Outcomes Portfolio. December 2007. Agency for Healthcare Research and Quality, Rockville, MD. http://www.ahrq.gov/research/findings/final-reports/pharmportfolio/chapter2.html