Health Care Systems for Tracking Colorectal Cancer Screening Tests

3. Assessment Plan and Methodology

We selected the Practical, Robust Implementation and Sustainability Model (PRISM) (Feldstein and Glasgow, 2008) as a framework to guide our evaluation. This framework highlights factors that affect the outcome of an intervention or program (context domain elements) and incorporates measures of success (outcome domain elements). The framework can be used to guide the design and development of programs or interventions as well as to assess their implementation and outcome.

PRISM Model 

When used as a guide to development, the PRISM framework identifies design elements and internal and external considerations that should be addressed and provides a set of questions to help developers address them. When using the framework as a guide to assess performance and outcome, evaluators can use the same elements and considerations to ascertain how they affected performance. Evaluators can also use the incorporated RE-AIM outcome measures of Reach, Effectiveness, Adoption, Implementation, and Maintenance (Glasgow, et al., 1999) to gauge the program's or intervention's impact. Figure 3.1, from Feldstein and Glasgow (2008), illustrates the various elements of PRISM and their interrelationships.

The context domain of the PRISM model consists of four elements: intervention, external environment, implementation and sustainability infrastructure, and recipients. The Intervention element includes the nature and design of the intervention from the perspective both of the organization in which it is delivered or implemented and of the patient receiving it. From the organization's perspective, PRISM suggests the organization consider such factors as the degree of readiness the intervention requires, the usability and adaptability of the intervention to organizational conditions, and the burden the intervention places on the organization. From the patient's perspective, factors to consider include the patient-centeredness of the intervention, the degree to which the intervention provides choices and addresses access and other barriers, and the burden (complexity and cost) the intervention places on the patient.

Considerations under External Environment include market forces and conditions, prevailing health care regulations and policies, and community resources. Implementation and Sustainability Infrastructure refers to such factors as having a dedicated implementation and sustainability team, training and support for implementers and adopters, and a flexible implementation and sustainability plan.

Like Intervention, the Recipient element has an organizational and a patient component. The organizational component refers to characteristics of an organization that may affect its ability to successfully deliver or implement the intervention. It includes such factors as organizational culture, clinical leadership, data and decision support, and systems of care. The patient component refers to characteristics of patients that may affect the intervention's ability to be successful with them. It includes such factors as demographics (especially age and gender), socioeconomics (especially education and insurance status), health status, and health knowledge and beliefs.

Each of the four context domain elements affects the intervention's performance, which is evaluated within PRISM's outcome domain, consisting of the five RE-AIM elements (Reach, Effectiveness, Adoption, Implementation, and Maintenance). As shown in Figure 3.1, the context domain affects Adoption, Implementation, and Maintenance, which in turn affect Reach and Effectiveness. Taken together, these five elements "represent the overall public health impact of a program or policy" (Belza, et al.).

Adoption refers to the participation rate among potential settings and "intervention agents" for implementing or delivering an intervention and the representativeness of those settings and agents. A key concern for adoption is whether the intervention can be adopted by a wide range of settings or whether only those with certain characteristics (such as strong financial resources or a functioning electronic medical record) adopt it. Implementation refers to both the fidelity of implementation (the degree to which the implemented intervention matches the intended intervention) and the consistency of implementation across settings and agents. Maintenance applies both to intervention settings (the extent to which an intervention becomes institutionalized into the settings' routine) and to intervention recipients (the long-term effects of the intervention on those exposed to it in terms of intended outcomes and quality of life).

Reach refers to the participation rate among potential or targeted recipients of the intervention and the representativeness of those who participate. As with Adoption, a key concern for Reach is whether all segments of a wide range of targeted participants will actually participate in an intervention or whether only those with certain characteristics (such as financial resources) will participate. Finally, Effectiveness refers to an intervention's outcome: its ability to achieve its intended (positive) impact without also causing (negative) unintended effects or adverse consequences.


Assessment Design

We assessed the PRISM context domain as it relates to our implementation of the SATIS-PHI/CRC intervention by gathering information about:

  • Perspectives of participating practices and patients regarding the intervention.
  • Prevailing conditions and events occurring in the external environment that could affect the implementation or outcome of the intervention.
  • Relevant infrastructure at the intervention's central entity (LVPHO and EPICNet) and at participating primary care practices to carry out and sustain the implementation.
  • Characteristics of participating practices and of the intervention patient population that also could affect the intervention's implementation or outcome.

We took a descriptive and often qualitative approach to assessing the context domain, seeking to understand the various contextual elements and how they likely affected implementation and outcome.

We assessed the PRISM outcome domain for SATIS-PHI/CRC (its overall public health impact) using the same descriptive approach as above for Adoption, Implementation, and Maintenance, and a more quantitative quasi-experimental approach for Reach and Effectiveness. This method is in keeping with the requirement stated in our task order contract directing us to use (1) quasi-experimental methods to evaluate the intervention's impact comparing pre- and postintervention measures at intervention and comparison (control) practice sites and (2) qualitative methods to evaluate the implementation process.

We conducted a quasi-experimental evaluation of the SATIS-PHI/CRC intervention with patients of 20 primary care practices assigned to one of two intervention arms or to the control arm. Assignment combined cluster allocation of practices to the intervention (15 practices) or control (5 practices) arm with subsequent randomization of patients within two selected intervention practices to one of the two intervention arms: receiving a mailing with a card to be mailed back to request a stool test kit or receiving a mailing with the kit enclosed.

We initially recruited 26 practices: one to serve as a pilot siteviii and the remaining 25 to be assigned to either the intervention or control arm. We assigned 20 practices to the intervention arm and 5 to the control arm; however, 5 intervention practices dropped out after assignment but before the start of the intervention, reducing the number of intervention practices to 15. We discuss this loss of intervention practices further below under Adoption. 

Figure 3.2 presents the cluster allocation and randomization process, along with the number of practices and patients at each step in the process. We had more information for intervention patients (from the SEA form, undeliverable addresses, and opt-outs) than we did for control patients. To compensate, we "adjusted" the number of control patients used as the denominator of screening rates when comparing intervention and control groups in several outcome evaluation analyses. This adjustment removed from the control denominator the same proportion of patients as this additional information excluded from the intervention group.
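The proportional adjustment can be sketched as simple arithmetic. The following is an illustrative reconstruction, not the study's actual code, and the counts used in the example are invented:

```python
def adjusted_control_denominator(n_control, n_intervention, n_excluded_intervention):
    """Remove from the control denominator the same proportion of patients
    as the additional information (SEA responses, undeliverable addresses,
    opt-outs) excluded from the intervention group."""
    exclusion_rate = n_excluded_intervention / n_intervention
    return round(n_control * (1 - exclusion_rate))

# Hypothetical example: 10 percent of intervention patients excluded.
print(adjusted_control_denominator(2000, 10000, 1000))  # -> 1800
```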

To be included in the intervention evaluation study, practices and patients had to meet the inclusion/exclusion criteria discussed below.

Practice Eligibility Criteria

To be eligible to participate in the study, practices had to be:

  • Affiliated with the LVPHO and a member of EPICNet.
  • A primary care practice that treated adults of both genders (family or general internal medicine, but not pediatrics or obstetrics).
  • Located in either Lehigh County or Northampton County, Pennsylvania.

A total of 111 practices met these criteria.

In addition, all of the clinicians at a given practice had to consent to participate for that practice to be eligible for inclusion. This criterion was necessary because we could not uniquely associate patients of a practice with the specific clinicians who were their primary providers. Because the intervention called for mailing material to a practice's patients on behalf of their providers as part of their health care, we required consent and permission from all of the practice's clinicians.

We further purposively recruited practices to ensure an adequate mix by:

  • Affiliation (LVHN residency clinic, LVPG practice, MATLV practice, or unaffiliated independent practice).
  • Size (three or fewer clinicians or more than three clinicians).
  • Specialty (family practice or general internal medicine).
  • Location (urban, suburban, or rural).
  • Presence or absence of an electronic medical record (EMR) system.

Patient Eligibility Criteria

We identified initially eligible patients by reviewing electronic claims records of patients insured through the LVPHO and electronic billing records and EMR information available from participating practices. To be initially eligible to participate in the study, patients had to be:

  1. Current patients of participating practices (had a visit to a participating practice within the 2 years immediately preceding the start of the interventionix).
  2. Age eligible (ages 50 through 79).
  3. At average risk for developing CRC (not having had either a diagnosis of CRC or a personal or family history predisposing one to CRC).
  4. Free from other complicating colorectal or gastrointestinal conditions (e.g., Crohn's disease or ulcerative colitis).
  5. Not up to date on CRC screening (not having had a CRC stool test within the past year, a flexible sigmoidoscopy or double contrast barium enema x ray in the past 5 years, or a colonoscopy within the past 10 years).x

In addition, since the intervention involved mailing materials to patients, they had to have a valid mailing address in at least one of the electronic data sources we reviewed. Finally, since a commercial Blue Cross plan serving the Lehigh Valley implemented a CRC screening campaign for its covered patients, we excluded patients with this coverage from our pool of initially eligible patients. This exclusion avoided possible confounding of the SATIS-PHI/CRC intervention effect and the Blue Cross campaign effect.
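The initial eligibility screen described above can be expressed as a single filter. This is a rough sketch only; the record fields and the intervention start date below are hypothetical placeholders, not the study's actual data layout:

```python
from datetime import date

# Hypothetical intervention start date, used only for illustration.
INTERVENTION_START = date(2010, 1, 1)

def initially_eligible(p):
    """Apply the initial patient eligibility criteria to a hypothetical
    patient record p (a dict with invented field names)."""
    years_since_visit = (INTERVENTION_START - p["last_visit"]).days / 365.25
    return (
        years_since_visit <= 2                    # current patient of the practice
        and 50 <= p["age"] <= 79                  # age eligible
        and not p["crc_history"]                  # average risk for CRC
        and not p["complicating_gi_condition"]    # e.g., Crohn's, ulcerative colitis
        and not p["up_to_date_on_screening"]      # screening due
        and p["valid_mailing_address"]            # intervention is mail based
        and not p["blue_cross_campaign_coverage"] # avoid confounding campaign
    )
```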

We subsequently deemed initially eligible patients to be ineligible if:

  • We discovered evidence from subsequent electronic record reviews or chart audits that patients did not meet eligibility criteria.
  • Intervention patients responding to the SEA form indicated they were not eligible.
  • The SEA form was undeliverable.

Finally, we excluded eligible intervention group patients if they exercised the opt-out option on the SEA form or the invitation to be screened mailing was undeliverable.


Data Sources

We used six types of data to conduct the intervention assessment: electronic records, Health Network Laboratories (HNL) stool test reports, chart audits, the SEA form, a survey of participating practices, and focus groups and informal interviews. Data from several of these sources are integral to the actual implementation of the SATIS-PHI/CRC intervention, and we used them for the implementation as well as for the assessment. We describe here how we used them for the assessment (Section 2 of this report describes how we used them for the implementation).

Electronic Records

We used electronic records to identify eligible patients and as one of three sources of data to track CRC screening and followup. Electronic records consisted of billing records, LVPHO claims, and EMRs. Since the PHO served as the central entity for the SATIS-PHI/CRC intervention, we had access to claims data for all 20 participating practices. This data source provided information on health care received by covered patients regardless of who the provider was (i.e., the care or service did not have to be provided by a participating practice to be included). However, the LVPHO insurance plans are employer based and cover only a small proportion of Lehigh Valley patients.

To gain access to a wider range of patients for this intervention and its assessment, including Medicare and Medicaid patients and self-pay/uninsured patients, we supplemented PHO claims data with the two other electronic record sources. EMR data provided the most complete and clinically rich data, but not all participating practices had EMR systems, and of those that did, several had systems we could not access. When EMR data were not available to us, we used billing data. Billing data are the least informative source of electronic data. We could use billing data to identify potentially eligible patients, but they were not useful for tracking screening and followup because the primary care practices did not bill for the CRC tests and diagnostic procedures we tracked.

Table 3.1 presents the electronic data sources we were able to access and use for each of the practices participating in the intervention study and the number of PHO insured and other patients from each practice included in the study. Practices are identified with three-character codes. The first character identifies whether the practice is in the control (C) or an intervention (I) arm of the study. The second character identifies the affiliation of the practice (A for independent practices receiving practice business services from LVHN, H for LVPG practices, M for MATLV practices, S for unaffiliated independent practices, and U for LVHN hospital clinics). The third character identifies the order in which we recruited the practice. Discontinuities in third characters (e.g., IHA to IHD) indicate where initially recruited practices subsequently dropped out of the study after assignment to study arm.
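The three-character practice codes can be decoded mechanically. The mappings below are taken directly from the text; the decoder itself is merely a convenience sketch:

```python
ARM = {"C": "control", "I": "intervention"}
AFFILIATION = {
    "A": "independent practice receiving LVHN business services",
    "H": "LVPG practice",
    "M": "MATLV practice",
    "S": "unaffiliated independent practice",
    "U": "LVHN hospital clinic",
}

def decode_practice(code):
    """Split a three-character practice code into study arm, affiliation,
    and recruitment-order letter."""
    return ARM[code[0]], AFFILIATION[code[1]], code[2]

print(decode_practice("IHA"))  # intervention arm, LVPG practice, recruited first
```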

Three control practices (CHA, CHB, and CUA) and seven intervention practices (IHA, IHE, IHF, IHH, IHI, IUA, and IUB) have the most complete data as we were able to use both PHO claims and EMR data for them. For the two remaining control practices (CMA and CSA) and five of the remaining eight intervention practices (IMB, IMD, ISB, ISC, and ISD), we were only able to use claims data. Thus, we were only able to include PHO patients from those practices in the study. The remaining three intervention practices (IAA, IHD, and ISA) have the least complete data as we were only able to use their claims and billing records.

As we describe below, we used clinical laboratory reports of stool tests from HNL to track stool test screenings from all practices; thus, incomplete electronic record data from some of the practices are not problematic for stool tests. However, they are problematic for both colonoscopy screens and diagnostic followups. Therefore, we omit practices IAA, IHD, and ISA, which have the most incomplete colonoscopy data, from analyses of colonoscopy intervention effects.

Table 3.1 also illustrates the difficulty we experienced identifying patient eligibility. After using the available electronic records to identify patients with visits to participating practices within the previous 2 years, we sorted the records by age to identify those who were age eligible. All of the electronic data sources were useful for these two initial sorts. However, identifying eligibility status for the average risk, complicating condition, and screening history criteria was more problematic, especially when EMR data were not available. We also could not identify PHO insurance status for practice ISA; we only know that we included both PHO-insured and other patients from that practice in the study.

Once we established current patient status and age eligibility, we considered patients to be eligible unless we found disqualifying evidence for them. Thus, inability to identify patients who should have been disqualified as ineligible is a greater problem than declaring eligible patients to be ineligible based on the available electronic records data. This problem causes the denominators for our screening rates to be overestimated, resulting in rates being underestimated. Since this condition varies between practices based on available data sources, we include data source as a statistical control variable in various outcome analyses.

HNL Stool Test Reports

As part of the study's agreement and protocol with Health Network Laboratories, the lab would not only serve as the supplier and processor of FIT tests, but would also provide test results to LVPHO study personnel and to the test ordering clinician.xi LVPHO study personnel would then record the test and results in the deidentified master patient database for analysis.

Chart Audits

For tracking the screening of study-eligible patients, as well as further determining eligibility, we supplemented information gained from electronic record reviews and HNL with chart audits for a sample of patients (the toolkit accompanying this report includes the chart audit form). This was a labor-intensive data collection effort requiring us to actively read charts for evidence of screening or eligibility. We followed a chart audit protocol developed by the study team in conducting the audits.

Study staff at LVPHO arranged to access a sample of charts of study patients from intervention and control practices. Some of these charts were electronic (from one of the several EMR systems used by practices participating in the study) and others were on paper. Regardless of medium, study staff read the relevant portions of the charts looking for evidence either of ineligibility or screening and followup. 

Within the study's resource limits of time and money to devote to chart audits, we set a target of charts to audit for each practice (Table 3.2). As described above, we targeted a 100 percent sample for practices with close to 50 or fewer study patients (resulting from only being able to access limited electronic data to identify potential eligibles). We targeted a 6 percent sample of study patients at all other intervention practices and an 8 percent sample of study patients at all other control practices.

We targeted a somewhat higher sampling percentage for control practices to help compensate for having less complete data for them from some other sources. To compensate for sampled charts being unavailable during the time period allotted to chart audits, we drew samples double the target at practices not being sampled at 100 percent (i.e., 12 percent for intervention practices and 16 percent for control practices). We further decided that, having drawn these extra charts, we would conduct audits somewhat above the set targets for at least some of the practices if they could be done within the study's time and funding limitations. Thus, some audit completion rates were above 100 percent.
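The sampling targets described above reduce to a small amount of arithmetic. The sketch below follows the text, with one stated assumption: "close to 50 or fewer study patients" is interpreted here as 50 or fewer.

```python
def audit_targets(n_study_patients, is_control):
    """Return (target charts to audit, charts to draw) for a practice.
    Assumes the 100 percent sample applies at <= 50 study patients."""
    if n_study_patients <= 50:
        target = n_study_patients       # 100 percent sample
        drawn = target                  # no oversample needed
    else:
        rate = 0.08 if is_control else 0.06
        target = round(n_study_patients * rate)
        drawn = 2 * target              # double draw to cover unavailable charts
    return target, drawn

print(audit_targets(500, is_control=False))  # -> (30, 60)
```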

Table 3.2 presents the chart audit completion rates for intervention and control practices. With the exception of two intervention practices (IAA and IMB) and one control practice (CSA), all completion rates were above 90 percent, and those for nine practices exceeded 100 percent. We conducted 849 audits, of which 96 resulted in identifying seemingly eligible patients who were, in fact, not eligible. From the remaining audits, we identified 28 additional screened patients (23 intervention patients and 5 control patients) whose screening was not detected through the other two data sources.

SEA Form

The primary purpose of the Screening Eligibility Assessment (SEA) form was to be a source of data for determining patient eligibility (the toolkit accompanying this report includes the SEA form). Section A of the survey asked patients whether they considered themselves to be ineligible for the intervention study based on age, being up to date on CRC screening, having had a previous CRC diagnosis, or not being a patient of the practice. In addition to this primary purpose, we used the SEA form to gather supplementary information about patients (race, ethnicity, marital status, language spoken, education, and perceived health status) not consistently available from other sources. We also used the form to allow patients to opt out of the intervention study. 

Table 3.3 presents the response rate to the mailed SEA form by participating intervention practice (the SEA form was not mailed to control practice patients). The overall response rate was 27.9 percent (2,810 responses out of 10,063 forms mailed). Although the central entity mailed the forms following the same protocol to all the practices, response rates nevertheless varied considerably among practices. Response rates ranged from a low of 21.4 percent, 21.8 percent, and 21.9 percent among patients of practices IAA, IUA, and IUB, respectively, to a high of 45.0 percent and 66.7 percent among patients of practices ISA and IMD, respectively. We were unable to account for this wide variation. Based on responses to the SEA form, we eliminated 1,342 patients as ineligible; an additional 300 opted out.

The SEA form also provided supplementary data for 1,131 eligible intervention practice patients out of a total of 7,965 such patients included in the study, or a response rate for eligible patients of 14.2 percent. This rate varied by practice from a low of 0.0 percent and 8.3 percent among patients of practices ISC and IAA, respectively, to a high of 22.9 percent and 52.6 percent among patients of practices IMB and IMD, respectively.
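The rates quoted above follow directly from the counts in the text; a small helper makes the arithmetic explicit:

```python
def response_rate(responses, mailed):
    """Response rate as a percentage, rounded to one decimal place."""
    return round(100 * responses / mailed, 1)

print(response_rate(2810, 10063))  # overall SEA response rate -> 27.9
print(response_rate(1131, 7965))   # eligible-patient supplementary-data rate -> 14.2
```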

Survey of Practice Providers and Staff

We fielded a survey to all practice providers and staff (both clinical and nonclinical) at the intervention and control practices to ascertain prevailing beliefs and behaviors regarding CRC screening and followup. We conducted these surveys both pre- and postintervention for the intervention practices, and once in the control practices (the toolkit accompanying this report includes the survey form; we used the same form for the pre, post, and control surveys). We administered the preintervention practice survey of providers and staff in the intervention practices to gather data regarding the current CRC screening environment at each practice. We then administered the survey again postintervention to ascertain changes in behavior or attitudes resulting from the intervention.

In addition, we distributed the survey in the control practices late in the intervention period to gather comparison information similar to the baseline information gathered from intervention practices. All surveys were completely anonymous and were assigned a unique random ID by practice. Due to IRB restrictions and the anonymous nature of the survey, we were not able to link the pre and post responses by individual respondents.

To collect the preintervention information, we sent the practice survey to practice administrators for distribution to all clinical and nonclinical staff in the practice prior to the academic detailing sessions and focus groups at the intervention practices. We then collected all practice surveys prior to the start of the academic detailing session and focus groups. We hoped that by distributing and collecting the survey prior to the academic detailing sessions and focus groups, we would minimize response bias (as this was a baseline assessment of current practices). To obtain the postintervention information, we sent the practice survey to the practice administrators prior to the final debrief session. We collected the completed surveys prior to the start of the debrief, when the postintervention focus groups were conducted.

For the control practices, we also distributed the practice surveys to the practice administrators and collected them prior to the start of the focus group. We did not conduct any debrief sessions with the control practices, as we plan to disseminate the toolkit and intervention materials to them at the conclusion of the study.

The practice survey contained identical questions for the preintervention, postintervention, and control data collections. The survey included the following topics:

  • What types of screening modalities do clinicians recommend?
  • What types of screening modalities do clinicians believe are effective?
  • What types of followup do clinicians recommend in response to positive screenings?
  • Who performs the various steps of the screening process within the practice?

The first three topics were directed to physicians and other clinicians only, while the fourth was directed to all respondents (physicians, other clinicians, and all other staff). The final section of the survey included demographic questions for all respondents.

The number of responses and response rates we received to the practice survey varied by practice and by survey group (i.e., preintervention, postintervention, and control). Tables 3.4, 3.5, and 3.6 show the number of surveys distributed and received for the pre- and postintervention surveys and the number of surveys received for the control practice surveys. Overall, across the intervention practices, we received 205 completed preintervention surveys for a 71.9 percent response rate and 135 completed postintervention surveys for a 47.4 percent response rate. For the control practices, we received 45 completed surveys. When we saw the lower-than-expected response rates for the postintervention and control practice surveys, we attempted to recontact the practices to obtain additional completed surveys. However, we were not able to increase our response rates.

Focus Groups and Informal Interviews

We conducted focus groups before and after the intervention at each of the 15 intervention practices and once near the end of the intervention period at each of the 5 control practices. The intended populations for the focus groups were all the providers and clinical and nonclinical staff of each practice. As with the practice survey, there was no sampling or selection process; we invited all providers and staff. We obtained informed consent from all focus group participants.

We conducted preintervention focus groups during the academic detailing sessions, but before the detailing itself began, to ensure baseline information unaffected by the information disseminated during the detailing. The purpose of these focus groups was to collect information to establish a baseline. We conducted postintervention focus groups during debrief sessions that occurred at each intervention practice at the end of the intervention period. The postintervention focus groups assessed satisfaction with the intervention and identified changes in attitudes and behaviors regarding screening and followup. They also identified changes in management of normal and abnormal screening tests resulting from the intervention.

We also conducted focus groups at the control practices late in the intervention period, using the preintervention focus group guide to gather information similar to the baseline information gathered from intervention practices. We conducted them late in the intervention period to avoid introducing any information that could influence the control practices' usual CRC screening or followup practices.

We used the same focus group guide for the preintervention and control practices in order to obtain baseline information. We asked participants the following types of questions:

  • What screening guidelines do they use?
  • How aware do they think their patients are regarding the importance of CRC screening?
  • How often and when do they recommend CRC screening to patients?
  • How can CRC screening and tracking be improved?

For the postintervention focus groups, we asked participants the following types of questions:

  • How satisfied were they with the intervention?
  • How did they feel the intervention affected their practice and themselves?
  • How did they feel the intervention affected their patients?
  • What was it like to adopt the intervention?
  • What could have improved the intervention?

Attendance at each of the focus groups varied, depending on the practice.

Key Informant Interviews

We conducted brief key informant interviews with selected providers and staff at intervention practices to ascertain additional baseline information about procedures and systems for screening results. These interviews collected information from selected knowledgeable practice personnel who provided information related to the practice as a whole. The interviews also allowed us to obtain answers to questions that remained unanswered or unclear based on the data received from the focus groups and survey. We did not conduct postintervention key informant interviews with practice staff or interviews at the control practices, as we were able to collect all necessary information from these focus groups and surveys.

Interview topics included:

  • How are screening guidelines disseminated throughout the practice?
  • What are some of the practice policies and procedures for CRC screening?
  • How does the practice identify patients eligible for screening?
  • How does the practice use an EMR to track screening?
  • What are the patient demographics of the practice?
  • What types of insurance does the practice accept?
  • What are some other unique characteristics of the practice?

The number of interviews conducted during the preintervention period varied by practice.

Patient Focus Groups

We conducted postintervention patient focus groups to better understand the intervention from the patient's perspective. We held two focus groups at two distinct sites, primarily with patients who had received the intervention. We recruited from among all eligible patients at each site and obtained informed consent from all participants.

The focus group topics included the following four key areas:

  1. What was the patient perception and knowledge about CRC screening?
  2. What did the patient think about the intervention (what worked and what did not work well)?
  3. What were some patient motivators and barriers?
  4. What else could have been done to further the screening objective?

Eleven individuals participated in these two discussions. Ten had received screening as the result of the intervention; one had not.

Informal Conversation With LVHN

In addition to the practice focus groups, we held an informal conversation with members of the LVHN study team to gather their impressions of the intervention and its implementation and outcome. During this conversation with LVHN project staff, we discussed the following topics:

  • How did the LVHN/LVPHO context affect the intervention?
  • How representative was this intervention of others in which the network has participated?
  • What aspects of the intervention worked well, and which did not work well?
  • Would they recommend introducing this intervention to the other practices in the network?
  • How would they describe the practices' levels of participation?

Three people participated in this informal discussion.


Outcome Measures

The PRISM intervention evaluation framework has two types of overall outcome measures: the reach of the intervention into the target patient population and the effectiveness of the intervention. For this study, we defined reach as the number, proportion, and representativeness of eligible patients of intervention practices who participated in the study. For our purposes, participation means that the eligible patient did not opt out and had a valid (deliverable) mailing address that permitted us to mail intervention materials to them. We measured representativeness by comparing participating eligible intervention patients with those who opted out or had undeliverable mailing addresses. We also measured the representativeness of the study population by comparing the distribution of study patients of intervention and control practices to that of all LVPHO patients ages 50-79.
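The reach measures described above can be sketched with a small calculation. The counts below are hypothetical, not study data; the point is only to make the definition of participation (eligible, did not opt out, deliverable address) concrete.

```python
# Illustrative sketch of the reach calculation described above.
# Participants are eligible patients who neither opted out nor had an
# undeliverable mailing address; reach proportion = participants / eligible.

def reach(eligible: int, opted_out: int, undeliverable: int) -> tuple[int, float]:
    participants = eligible - opted_out - undeliverable
    return participants, participants / eligible

# Hypothetical counts for one set of intervention practices:
participants, proportion = reach(eligible=1000, opted_out=40, undeliverable=60)
# participants == 900; reach proportion == 0.9
```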

We identified and measured several kinds of intervention effects. SATIS-PHI/CRC seeks to improve patient screening and followup both through encouraging and facilitating patients to become screened and through academic detailing to providers and other practice staff regarding evidence-based screening guidelines (as shown in Figure 1.2). We sought to assess the effect of the intervention on these outcomes.

We measured the effect of the intervention on patient screening by comparing the rate and likelihood (odds) of intervention patients being screened with those of control patients during an 8-month observation period. The observation period began after the mailing of the letter inviting patients to be screened and the accompanying screening information and materials. We separately compared each intervention study arm (receiving a card to request a stool test kit or receiving the stool test kit directly without having to request it) with the control arm for being screened by stool test, by colonoscopy, and by any modality. We also compared the two intervention arms with each other within the two practices with randomized patients.

We measured variation in effect size among intervention patients classified by several individual and practice-level attributes to assess which kinds of patients the intervention was most likely to affect. In addition, we estimated the effect size comparing intervention and control patients while adjusting for these attributes, to assess whether the intervention effect persisted after controlling for them. We corrected for possible clustering effects of assigning patients to the intervention and control study arms by practice and also estimated effect size adjusting for several sources of possible measurement error. In particular, we adjusted the screening rate denominator for control practice patients in several analyses to compensate for not having eligibility data from the SEA form for them. We also sought to assess the possible impact of incomplete screening tracking data on effect size for screening.
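One standard way to correct for clustering of patients within practices is to inflate variance estimates by a design effect. The sketch below illustrates that general idea with hypothetical values; it is not necessarily the specific correction used in this study.

```python
import math

# Generic design-effect sketch for practice-level clustering.
# DEFF = 1 + (m - 1) * ICC, where m is the average cluster (practice) size
# and ICC is the intracluster correlation coefficient. Standard errors
# computed under an assumption of independence inflate by sqrt(DEFF).
# The values below are hypothetical.

def design_effect(avg_cluster_size: float, icc: float) -> float:
    return 1 + (avg_cluster_size - 1) * icc

deff = design_effect(avg_cluster_size=500, icc=0.01)  # about 5.99
se_inflation = math.sqrt(deff)  # multiply naive standard errors by this factor
```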

We conducted practice-level analyses, examining variation in screening rates by practice, to assess both the degree of variation and whether intervention practices generally had higher rates than control practices. We looked at whether there was more variation between groups (intervention practices vs. control practices) than within groups. Finally, we assessed possible negative consequences of patient screening by examining the results of diagnostic colonoscopies done as a followup to positive or abnormal stool test screens.

In addition to the intervention's effect on patient screening, we assessed its effect on followup of positive screens. For purposes of this assessment, we defined followup as performing a guideline-consistent complete diagnostic evaluation (CDE) in response to a positive or abnormal stool test. As there were only a few positive stool tests within the study population during the observation period, we were only able to do a minimal analysis of this effect.

We measured the effect of the academic detailing portion of the intervention on providers and practices by comparing results of the preintervention and postintervention survey of intervention practices. Since we only conducted a preintervention survey of control practices, we could not assess pre-post differences within these practices or compare them with pre-post differences in intervention practices. We defined a positive intervention effect to be a movement in responses away from beliefs and behaviors not supported by guidelines and toward those that are guideline supported. In particular, we assessed changes in the proportion of clinician respondents who recommended nonsupported screening modalities to their patients, believed that nonsupported modalities were effective, and followed up positive stool tests and flexible sigmoidoscopies with nonsupported procedures.

We also assessed the impact of the intervention on clinician attitudes toward fecal immunochemical tests (FITs), which were one of the two screening modalities offered to patients through the intervention. We examined changes in pre-post responses to recommending the FIT and believing it to be effective. In addition, we compared pre- and postintervention responses to evaluate any impact of the intervention on whether various steps in the screening process for stool tests and colonoscopy were performed within intervention practices.


Patient and Practice Attributes

We sought to understand how outcome screening rates varied by patient and practice attributes. We also sought to determine whether and how these attributes affect intervention effect size for screening rates. Patient attribute data came from the electronic records we reviewed to determine eligibility and from responses to the SEA form (for those intervention patients who returned an SEA form with this information; control patients did not receive an SEA form). Electronic records provided age (date of birth), gender, and primary insurance coverage.xii The SEA form provided marital status,xiii perceived health status, and education.xiv

We calculated age as of the start of the intervention from date of birth and then coded the result into a series of age categories. We coded insurance coverage into the categories of commercial, Medicare, Medicaid, and self-pay/uninsured. The data from electronic records were relatively complete. Out of 7,965 intervention patients, only 343 cases (4.3 percent), 2 cases (0.03 percent), and 683 cases (8.6 percent) were missing data for gender, age, and insurance coverage, respectively.
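As a quick arithmetic check, the missing-data percentages reported above follow directly from the counts in the text:

```python
# Check of the reported missing-data percentages for the 7,965
# intervention patients (counts taken from the text above).

total = 7965
missing = {"gender": 343, "age": 2, "insurance": 683}
pct = {field: round(100 * n / total, 2) for field, n in missing.items()}
# gender: 4.31 percent (reported as 4.3), age: 0.03 percent,
# insurance: 8.58 percent (reported as 8.6)
```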

For the 2,662 control patients, no cases were missing gender data and only 19 cases (0.7 percent) were missing age data; however, all 2,662 were missing insurance data. Thus, we could not include insurance coverage in analyses involving control patients. Data from the SEA form, on the other hand, were much less complete (and nonexistent for control patients). Marital status data, even after supplementation from electronic records, were available for only 48.4 percent of intervention patients. Perceived health status and education were only available for 13.7 percent.

LVHN study staff provided practice attribute data for the LVPHO/EPICNet practices participating as either intervention or control practices in SATIS-PHI/CRC. In allocating practices to intervention or control study arms, we used five attributes:

  1. Size (number of clinicians dichotomized as small if three or fewer and large if more than three).
  2. Affiliation (affiliated with LVHN as either a hospital clinic or LVPG practice or not affiliated with LVHN and either part of MATLV or totally independent).
  3. Specialty (family medicine or general internal medicine).
  4. Location (urban, suburban, or rural).
  5. EMR (having or not having an EMR system).

We used the same attributes in outcome analyses, with the exception of EMR. The LVPHO central entity performed the SATIS-PHI/CRC function of identifying, contacting, and tracking eligible patients. Thus, the presence or absence of an EMR system in a practice should not have affected the rate at which its patients were screened in response to the intervention. Still, our ability to access and use an EMR system for this function, as well as the availability of other electronic data sources, was likely to affect how accurately we assessed eligibility. It could also affect our ability to find evidence of screening and thus to measure screening rates accurately. Therefore, we modified the EMR attribute variable to indicate the relative completeness of the electronic data available for a particular practice.

As shown in Table 3.1, for some practices we were able to use a combination of EMR and PHO insurance claims data. This combination provided the most complete data for patients identified as eligible in these practices. For other practices, we were only able to use PHO claims data. This single data source restricted us to patients who were insured through the PHO. The lack of EMR data restricted our ability to observe any clinical evidence of ineligibility; thus, we likely included a sizable proportion of ineligible patients in the denominator of screening rates for patients of these practices. However, the PHO claims were a good source of data for tracking screening. Accordingly, we coded these practices as having moderately complete data.

For the remaining practices, we were able to use a combination of billing records and PHO claims. The billing records allowed us to include more than PHO-insured patients but did not provide good tracking data for them. Therefore, we coded these practices as having the least complete data.

viii The pilot was solely used to gain experience with the intervention and to identify weaknesses in our implementation plan that could be addressed before the full intervention. We reported our experience, lessons learned, and findings in our Preliminary Report of Findings, submitted to AHRQ in September 2009.
ix We defined the start of the intervention for patients to be the mailing of the introductory letter and SEA form. If a patient had a visit to more than one participating practice in the previous 2 years, we associated that patient with the practice visited most recently.
x We based the age, average risk, complicating condition, and screening history criteria on recent CRC screening guidelines (U.S. Preventive Services Task Force, 2008; Levin, et al., 2008).
xi Patient instructions accompanying the FIT stool test kit directed patients to return the completed kit to their primary care physician's practice, which would then write an order from the patient's physician to the lab to process the kit. HNL then reported the test result to the ordering clinician and periodically sent study personnel affiliated with LVPHO a list of patients tested and their results. HNL also provided information to study personnel on patients of control practices.
xii Some electronic records also contained fields for race and ethnicity; however, the data were incomplete and race and ethnicity categories were inconsistent across different record systems and with those used on the SEA form (which used OMB-approved categories). Thus, we decided not to use these data elements in any analyses.
xiii We were able to use electronic records to add marital status data for some patients not returning an SEA form.
xiv In addition to these three variables, the SEA form requested information on ethnicity, race, and primary language. We did not use these variables in any analyses because many SEA respondents did not provide the information. Of those who did, the overwhelming majority was non-Hispanic and spoke English. Thus, there was neither sufficient data nor sufficient variation for these variables to warrant their inclusion.


Page last reviewed October 2014
Page originally created September 2012
Internet Citation: 3. Assessment Plan and Methodology. Content last reviewed October 2014. Agency for Healthcare Research and Quality, Rockville, MD.