Chapter 3. Inputs to Designing the CDSMP Evaluation

The design team carried out a number of activities to inform the evaluation design: a literature review; a review of gray literature and data sources; conference calls with CDSMP representatives; a review of program data from the CDSMP technical assistance contractor; and a conference call with the Technical Expert Panel. Below, we describe these activities along with their implications for the evaluation design.

A. Review of CDSMP Studies, Gray Literature, and Data Sources

A literature review of CDSMP evaluation studies was completed in Fall 2010 to inform this research design. The primary sources of literature were PubMed/MEDLINE and EBSCOhost, supplemented by the reference list of the Centers for Disease Control and Prevention (CDC) meta-analysis and other materials provided by AoA. Articles in three primary areas were reviewed: 1) program evaluation methodology; 2) outcome variables and the tools used to collect and measure them; and 3) program characteristics. Overall, 44 peer-reviewed articles were abstracted. Key findings from this review are summarized below.

A.1 Results of Past CDSMP Evaluations

Overall, the studies reviewed for this report provide evidence supporting the utility of the CDSMP and similar self-management programs in improving self-efficacy, health status, and health behaviors. In addition, while fewer studies investigated the effects on health care utilization, those that did found significant reductions in physician visits and hospital stay duration, suggesting that savings to health care financing programs such as Medicare and Medicaid may be possible. However, there are few subgroup analyses of the population AoA is mandated to serve, that is, people aged 60 or older, despite this group's participation in the randomized controlled trials of the intervention. The few studies that examined the effects of CDSMP on older individuals either found no positive effects or found only weak effects. The findings described below are therefore for the general age population.

Though the 2001 study by Lorig and colleagues is one of few to report health and other outcomes for up to two years (most reported only 4-, 6-, or 12-month outcomes), this study did not have a formal control group after the first 6 months, because at that point the wait-listed control participants were offered the CDSMP intervention. Smeulders et al. (2009) found that 6-month findings on health behaviors and health care utilization were no longer present at 12 months, suggesting that, to understand the long-term effects of the program, follow-up data collection should be extended beyond 6 months. How long the CDSMP intervention can ethically and practically be withheld is a question that AoA and the national evaluation contractor will have to grapple with. It may be more realistic to follow the Lorig model by offering the CDSMP intervention to control participants after the first 6 months and tracking their outcomes over time.

Studies that tracked health care utilization found statistically significant results after 6 months (e.g., Ahmed & Villagra, 2006), 10 months (Ahmed & Villagra, 2006), 12 months (Ersek et al., 2008; Lorig, Ritter, & Jacquez, 2005; Lorig et al., 2001), and 2 years (Lorig et al., 2001). However, these studies did not use a specific instrument to measure health care utilization; rather, participants self-reported their utilization on questionnaires, and these reports were then checked against participants' clinical records. One study relied on claims data to analyze CDSMP effects on utilization.

Some studies reported decreases in health services utilization (e.g., inpatient hospital use) and potentially reduced costs for CDSMP participants; however, these studies have several limitations, including reliance on participant self-reports. Self-reports of health care utilization should be viewed with caution because they vary in accuracy. According to Bhandari and Wagner (2006), several factors affect the accuracy of self-reported utilization information, including the sample population and its cognitive abilities, the recall time frame, and the type of utilization of interest.

Many studies reviewed and analyzed participant demographics such as age, sex, and race/ethnicity to determine program effects by demographic variable. Though most study samples consisted largely of Caucasian and female CDSMP participants, several studies specifically targeted Hispanic participants (two studies) and rural African American older adults (one study). Similarly, most studies reported age as a sample descriptor rather than analyzing age in relation to outcomes.

Findings were mixed with regard to whether disease‐specific or generic CDSMPs result in better outcomes for participants. Since AoA funds have mostly been used to implement the generic CDSMP, we recommend that the national evaluation focus on generic CDSMPs.

A.2 Strategies Used to Evaluate CDSMP Programs

Two main methodological techniques have been used to evaluate CDSMP programs: the pre-post test design and the randomized controlled trial (RCT). We recommend an RCT design. In this subsection, we describe both approaches and our rationale for recommending the RCT design.

A.2.1 Randomized Controlled Trial (RCT) Design Approach

An RCT design is the most rigorous approach to evaluating outcomes of CDSMP participants. In this approach, participants are randomized into treatment or control groups. Participants in both groups are surveyed at baseline and follow-up periods. Control group participants are usually placed on wait lists and receive the intervention at a later date. Populations included in these trials have ranged from persons with chronic disease in general to those with specific chronic diseases such as diabetes, stroke, heart disease, or inflammatory bowel disease. Several RCTs have also been conducted on programs similar to or adapted from the CDSMP intervention model. Adaptations were made to accommodate the method of identifying participant eligibility (e.g., self-report rather than physician diagnosis) or for use as a disease-specific program, such as hypertension. 

The advantages of this approach are that the RCT is the most rigorous design possible and that all participants will eventually have the opportunity to receive program training after a 6-month waiting period. However, the evaluation contractor will need to justify to a review board the practice of temporarily denying a widely available intervention with significant benefits, and may determine the maximum wait time based on input from stakeholders and program experts. Furthermore, to find statistically significant results, a minimum number of CDSMP applicants must agree to be randomized and waitlisted if assigned to the control group; the willingness of CDSMP applicants to accept randomization is unknown. There should also be a robust mechanism to track people assigned to the control group and to prevent them from taking the workshop during the specified intervention period. Still, an RCT design is an extremely reliable form of scientific evidence in the hierarchy of evidence that influences health care policy and practice, because RCTs minimize spurious causality and bias.
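
To make the sample-size concern concrete, the sketch below shows a standard two-arm power calculation in Python; the effect size, power, and significance level are illustrative assumptions, not figures from this report.

```python
# A minimal power-analysis sketch using statsmodels; the assumed effect
# size (Cohen's d = 0.25) is a hypothetical placeholder.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Smallest per-arm sample needed to detect the assumed effect with 80%
# power at a two-sided alpha of 0.05, with 1:1 allocation.
n_per_arm = analysis.solve_power(effect_size=0.25, alpha=0.05,
                                 power=0.80, ratio=1.0,
                                 alternative='two-sided')
print(f"Required participants per arm: {n_per_arm:.0f}")
```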

A.2.2 Pre-Post Test Design Approach

Ten of the 25 studies examined for the literature review employed a pre-post design. This technique is a non-randomized intervention design, with no control group, that assesses outcome measures at baseline and again at various intervals after implementation of the self-management program intervention. This type of longitudinal design compares changes in outcomes over time, but it is limited in the inference of causality because the sample is not randomized and contains no control group. Rather, the participants in the study constitute the intervention group, all of whom are followed before and after the CDSMP. The lack of a robust comparison group is a weakness of this approach.

Each pre-post study tracked participants and administered follow-up questionnaires after the CDSMP intervention. While some studies followed up with participants only once, at 4–6 months post-intervention, others used multiple data collection intervals to evaluate the self-management program; in these cases, data collection occurred at baseline, 4–6 months post-intervention, and 12 months post-intervention. Two reviewed studies tracked participants up to 2 years following the onset of the intervention.

This approach may be considered if an RCT design is determined to be infeasible. It avoids the ethical implications of withholding treatment from participants, potential contamination of the control sample, and the loss of participants who are unwilling to be randomized. It would also allow the evaluation contractor to include sites with extremely small CDSMP programs that have trouble with recruitment. Finally, CDSMPs have program quotas that they must meet and report to AoA in order to maintain funding, and the use of a pre-post design would not make it more difficult for sites to meet their quotas for each reporting period (although it is likely that AoA would relax quotas to accommodate study participation). Despite these advantages, the analytical strengths of the RCT approach still outweigh the flexibility of the pre-post design.

A.2.3 Randomization and Sampling Strategies

Randomization occurred after baseline data collection, often using a blinded randomizer strategy. Five of the CDSMP trials used a straightforward wait-list method for control group participants, such that all control group members had the option to enroll in a CDSMP after the study period, usually 6 months. Barlow and colleagues (2009) enhanced their treatment and wait-list control group design by adding a second, non-randomized control group consisting of individuals who explicitly reported disinterest in participating in the CDSMP, for whom baseline and follow-up data were collected and analyzed. This strategy creates a third group that may not traditionally participate in the program, but provides an interesting comparison group to those receiving the intervention. However, given the sample size requirements needed to generate statistically significant results, we do not recommend inclusion of this second control group in the evaluation study.

To ensure that a sufficient number of participants were selected into CDSMP treatment groups, many studies used specific randomization ratios, such as a 3:2 or 2:1 treatment-control ratio. Our recommendation for the evaluation contractor is a 1:1 treatment-control ratio. We make this recommendation to minimize the sample size required for a given level of statistical precision.
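
The arithmetic behind this recommendation can be sketched as follows: for a fixed effect size and power, the total sample needed grows as the allocation ratio departs from 1:1. The sketch below uses an assumed effect size purely for illustration.

```python
# Total sample size versus treatment-control allocation ratio, holding
# the assumed effect size (d = 0.25), alpha, and power fixed.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for ratio in (1.0, 1.5, 2.0, 3.0):  # ratio = n_group2 / n_group1
    n1 = analysis.solve_power(effect_size=0.25, alpha=0.05,
                              power=0.80, ratio=ratio)
    print(f"ratio {ratio}: total N = {n1 * (1 + ratio):.0f}")
# For fixed precision, total N is proportional to 2 + r + 1/r, which is
# minimized at r = 1, i.e., equal allocation.
```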

One study investigating outcomes of CDSMP participants across different ethnic groups used a stratified sampling approach before randomizing participants into treatment and control groups. In this design, eligible prospective participants were stratified based on spoken language and geographic area, and then randomization occurred at a 2:1 treatment-control ratio. 
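
As a rough illustration of this design, the sketch below randomizes within language-by-area strata at a 2:1 ratio; the participant fields are hypothetical, not items from any CDSMP enrollment form.

```python
# Stratified 2:1 randomization sketch; 'language' and 'area' are assumed
# field names used only for illustration.
import random
from collections import defaultdict

def stratified_randomize(participants, seed=0):
    """participants: list of dicts with 'id', 'language', 'area' keys."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for p in participants:
        strata[(p["language"], p["area"])].append(p)

    assignments = {}
    for members in strata.values():
        rng.shuffle(members)
        for i, p in enumerate(members):
            # In each shuffled block of 3, two go to treatment and one
            # to control, yielding a 2:1 ratio within the stratum.
            assignments[p["id"]] = "treatment" if i % 3 < 2 else "control"
    return assignments
```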

There are three sampling strategies for the evaluation contractor to consider. Strategy 1 poses minimal disruption to the host site. In this option, the site would obtain consent when the participant shows up for the first workshop. This is potentially infeasible because participants who have already decided to take the workshop may be unwilling to be randomized once they arrive at the site. Even if participants agree to be randomized and end up in the control group, they may change their minds or take another workshop. This strategy places a burden on the host site to convince participants to join the evaluation, to track those interested in the program, and to monitor whether the control group is contaminated.

Strategies 2 and 3 require the evaluation to make a significant contribution to the infrastructure of the CDSM program. In Strategy 2, AoA would provide additional funds to participating host sites for an enrollment system that collects baseline information and obtains consent before the participant arrives at the workshop. In this approach, there would be a standard tracking and evaluation system across sites. Strategy 3 builds on Strategy 2 by adding AoA funding for recruitment activities and by incorporating an advanced marketing strategy, which may address both the evaluation's sample size needs and host sites' difficulty attracting sufficient participants. The reviewed literature includes several strategies for recruiting participants, such as advertisements in doctors' offices, senior centers, and religious institutions, as well as public service and radio announcements.

The evaluation contractor should also take into account statewide enrollment systems. A statewide enrollment system is a central point of contact, such as a Web site or phone number, through which a state can enroll or recruit participants for the program. If a site selected for the evaluation is in a state with a statewide enrollment system, the evaluation contractor should consider revising the recruitment strategy for that site and reconsider where baseline information is collected.

A.3 Outcomes of Interest

We recommend that the evaluation contractor study health status; health behaviors; self-efficacy; quality of life (QOL) outcomes, including social/role activity limitations and psychological well-being; cognitive symptom management; and health care utilization and costs.

The literature reviewed by the design team demonstrates that investigators agree overwhelmingly that self‐efficacy is a critical concept to measure when evaluating a CDSMP program; scales developed by Lorig and colleagues (1999) have been widely used to measure this outcome. Self‐efficacy theory, as developed by Albert Bandura, states that high‐level self‐efficacy is a prerequisite to realizing self‐management goals, as well as critical in determining whether individuals will maintain or improve their health status (Bandura, 1989 as cited in Du & Yuan, 2010). For this reason, a participant's belief in his or her ability to manage the condition can act as a predictor of health outcomes.

Measures of health status and health behaviors are also common, with studies looking specifically at self-rated health, degree of pain and discomfort, role limitations, time spent engaging in exercise, and other indicators through the use of validated scales. One study included the validation of a measure developed specifically for self-management program outcome evaluation, the Health Education Impact Questionnaire (HeiQ), which was shown to have strong psychometric properties including validity (Osborne, Elsworth, & Whitfield, 2007).

To allow AoA to compare the findings of the national evaluation to previous CDSMP studies and to provide cost estimates for AoA's use, we recommend that the health, QOL, and self-efficacy measures be derived using instruments and methods used by Stanford University (refer to the literature review in Appendix B as well as http://patienteducation.stanford.edu/research).

The Stanford Patient Education Research Center has developed a Chronic Disease Sample Questionnaire (included in Appendix C), which is intended to be used as a mail survey of CDSMP participants. Its questions cover most of the course topics. The questionnaire includes the following scales, most of which have good psychometric properties:

  • Health status indicators: general self-rated health status; health distress; and measures of fatigue, shortness of breath, and pain.
  • Health behaviors: exercise scale.
  • QOL: social/role activity limitations.
  • Self-efficacy: 6-item chronic disease self-efficacy scale.

The questionnaire does not include measures of cognitive symptom management or disability status, which we would recommend be included in the national CDSMP survey. Item scales for each of these may be found on the Stanford Patient Education Research Center Web site.

Health care utilization has also been examined as an outcome, primarily through the use of self-reported visits to physicians and emergency rooms and the number and duration of hospital stays. One study relied on analysis of claims data from participants enrolled in a managed care organization (Ahmed & Villagra, 2006). In addition, the cost-effectiveness of self-management programs has been assessed to demonstrate potential reductions in health care utilization (Kennedy et al., 2007; Richardson et al., 2008). We recommend the evaluation contractor study health care utilization outcomes, including physician visits, emergency department visits, and hospitalizations, and health care expenditures, including costs to Medicare, Medicaid (if feasible, see below), and out-of-pocket costs. Since AoA's target population is older adults (60 years or older), we recommend that Medicare (and Medicaid, if feasible) data be obtained for individuals 65 years of age or older. Medicaid enrollees in this age group will be dual eligibles.

Although the Stanford Chronic Disease Sample Questionnaire includes a four-item health care utilization scale, the design team recommends that utilization be measured via health care claims. Medicare and perhaps also Medicaid claims data will allow both utilization and costs for fee-for-service beneficiaries1 to be measured together, with perhaps more accuracy than the self-reported measures used in prior CDSMP studies. 

To construct a longitudinal data set of treatment and control participants that links self-reported health status and behavior indicators to utilization and expenditures, it will be necessary to obtain, from each study participant, identifying information beyond name and date of birth (the items included in the Stanford Chronic Disease Sample Questionnaire). We suggest that the CDSMP enrollment application and pre-consent process obtain name, social security number (SSN), Medicare health insurance claim number (HICN), Medicaid enrollee identifiers if available (SSN or state-assigned MSIS-ID), gender, and date of birth to ensure that adequate identifiers are available for linking individuals across time and data type (e.g., claim, survey). Medicare and other health records should be collected at least for the period 12 months prior to enrollment, as well as 6 and 12 months post-enrollment. Details on data collection and data analysis methods are described in Chapter 5.
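
As a rough sketch of the linkage step (the actual methods are described in Chapter 5), the pandas fragment below joins claims to survey waves through an identifier crosswalk; all file and column names are hypothetical placeholders, not CMS or Stanford field names.

```python
# Build a person-level longitudinal file by linking survey waves to
# Medicare claims via an identifier crosswalk (hypothetical columns).
import pandas as pd

surveys = pd.read_csv("survey_waves.csv")    # person_id, wave, self_rated_health, ...
claims = pd.read_csv("medicare_claims.csv")  # hicn, service_date, paid_amount, ...
xwalk = pd.read_csv("id_crosswalk.csv")      # person_id <-> hicn, built from the
                                             # enrollment identifiers described above

# Attach the study person_id to each claim, aggregate utilization and
# cost per person, then join onto the survey waves.
claims = claims.merge(xwalk, on="hicn", how="inner")
per_person = (claims.groupby("person_id")
                    .agg(total_paid=("paid_amount", "sum"),
                         n_claims=("paid_amount", "size"))
                    .reset_index())
longitudinal = surveys.merge(per_person, on="person_id", how="left")
```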

A.3.1 Data Limitations

As described earlier, CDSMPs are open to older adults with chronic conditions. According to NCOA, 27 percent of current CDSMP participants are below the age of 60 (NCOA, demographic report prepared in March 2011). Though self-reported health status data can be obtained and evaluated for this cohort of participants, the utilization and cost analyses will likely be limited to the Medicare, and perhaps also Medicaid, beneficiaries. With sufficient personal identifiers, the evaluation contractor can develop a longitudinal data set to track participation, health status, and physician and hospital utilization and related costs. This is particularly feasible for Medicare beneficiaries, but less so for dual-eligible Medicaid enrollees, for the reasons described below.

Several key parameters must be met in order to evaluate health-related utilization and costs for CDSMP treatment and control participants: (1) individual participants must be identifiable and linkable across time and datasets; (2) claims data must be available for the same time periods as the randomized assignment study; (3) privacy-protected data housed by CMS must be made available; and (4) sufficient time and resources must be applied to create and analyze the longitudinal data file. Although we are confident that a file can be constructed linking the treatment and control group participants to Medicare claims for the study period (presuming Medicare enrollment data are available for accurate linking), we are less confident that Medicaid claims can be linked in with sufficient accuracy and timeliness. Following are the currently available Medicaid data:

  • Medicaid Statistical Information System (MSIS) data are submitted by each state's Medicaid program and contain Medicaid enrollment and paid claims information. They can be obtained from CMS by fiscal quarter. However, each state has a different manner of identifying eligible beneficiaries and reporting services, and how these are labeled in the MSIS files may vary by state. These state-specific anomalies limit the cost-effectiveness of using the data and thus their utility for the national CDSMP evaluation. Furthermore, there appears to be a time lag of at least two years between a Medicaid-covered service and data availability.
  • Medicaid Analytic eXtract (MAX) data are built from the MSIS and reflect the services used by Medicaid enrollees during a calendar year. The most recently available data are for calendar year 2008. The MAX data are desirable for this CDSMP evaluation, as they are analytic extracts that enable analyses of enrollment, utilization, and expenditures at the person level (Wenzlow, Schmitz, & Shepperson, 2008). However, the files require time for MSIS data to be validated and for the files to be built, cleaned, and made available; thus, the time lag may be too great for MAX data to be included in these analyses.

The long time lag makes it infeasible to utilize either MSIS or MAX in a study that is less than 6 years long. Another way to obtain Medicaid data without long delays is to identify a small number of states that are willing to provide Medicaid data directly. Due to significant inconsistencies across states, we recommend that only two states be identified. Note that these states will need robust CDSMP outreach to Medicaid enrollees and a significant number of Medicaid enrollees taking CDSMP workshops. Through an environmental scan and focus groups, the evaluator should assess states for the robustness of their CDSMP's Medicaid focus, if any, and obtain buy-in from states to make their Medicaid data available. The design team recommends that the evaluation contractor weigh the strengths and potential limitations of various Medicaid data acquisition strategies to ultimately decide whether and how the evaluation should include analyses of Medicaid utilization and costs as a CDSMP outcome. We also recommend that some items regarding out-of-pocket costs to individual CDSMP participants be added to the self-reported survey questionnaire.

A.4 Dual Eligibles and CDSMP

Even though it may not be feasible to use Medicaid data in the evaluation, there may still be ways, albeit limited, to assess the impact of CDSMP on some aspects of Medicaid through Medicare data on dual eligibles. Dual eligibles are the 9 million low-income elderly and disabled Medicare beneficiaries who also qualify for Medicaid coverage. They account for 18% of Medicaid enrollees but 46% of Medicaid spending, and the population is growing: Medicaid coverage rates among the community-dwelling population over age 65 increased from 7.6 percent in 1987 to 14.1 percent in 1996. The management of chronic conditions in this group is therefore likely to result in substantial savings. Furthermore, health care reform (Affordable Care Act, 2010) stipulated a number of initiatives related to provision of CDSMP to dual eligibles. Medicaid covers important services and co-pays that Medicare limits or does not cover, such as long-term care. Co-pays that would be paid by Medicaid for doctor visits and hospitalizations are recorded in Medicare data, which may facilitate a limited evaluation of the impact of CDSMP on Medicaid. The evaluation contractor should research and assess such opportunities.

After determining the potential of the CDSMP to reduce Medicaid costs among dual eligibles, we considered whether and to what extent states involve this population in the CDSMP. Our research, including conference calls with a number of CDSMP grantees and analysis of CDSMP technical assistance data, suggested that dual eligibles may not yet be robustly targeted. While it is unclear what proportion of CDSMP participants are Medicaid beneficiaries, there may be more data on this population in the near future: American Recovery and Reinvestment Act (ARRA) funding for CDSMPs requires that the State Medicaid Agency be involved in the development and implementation of the program (AoA, 2010). The new CDSMP programs are required to give special attention to serving low-income, minority, and limited-English-speaking older adults, including Medicaid-eligible individuals.

An online search of CDSMP programs and mentions of the Medicaid population found that states such as New York are beginning to include this population, while others (e.g., Maryland) intend to do so in later years of the program (NACDD, 2010). With state data indicating that 5% of the Medicaid chronic care population accounts for 50% of Medicaid health care expenses (Goehring, 2010), Washington State now offers reimbursement for a diabetes SMP and aims to provide CDSMP reimbursement for Medicaid.

According to a recent CDC brief, a few states are moving toward Medicaid reimbursement for CDSMP (Gordon & Galloway, 2010; www.healthyagingprograms.org). While this has been occurring on a relatively small scale to date, the brief reports that one state has Medicaid clinics specializing in asthma and diabetes, and these patients receive referrals to CDSMP programs. Another strategy has been to train Medicaid managers to run CDSMP programs within their clinics. A Partners in Care Foundation conference in 2010 argued, from a Social Enterprise Reimbursement Model, that once Medicaid accepts CDSMPs as a reimbursable benefit, states can cover the benefit under the Medicaid Waiver program. The state of Washington amended its Aged/Disabled Waiver to include provision of CDSMPs, and California is pursuing a similar strategy. Through Oregon's "Living Well" program, CDSMP coaches refer Medicaid fee-for-service clients to programs in their area.

We conclude that the evaluation contractor should examine Medicaid outcomes if feasible. By the time the evaluation contract is awarded, states are expected to be engaging dual eligibles more heavily. Even though engagement of dual eligibles with CDSMP is not yet widespread (i.e., nationwide), a few states may be well along. The evaluation contractor should identify such states, explain the benefits of the evaluation, and try to secure their cooperation in providing Medicaid data directly to the evaluator. At the very least, the evaluator should identify viable ways of indirectly examining Medicaid outcomes through Medicare data for dual eligibles.

B. Conference Calls with CDSMP Representatives

The research team facilitated conference calls in early March 2011 with "on-the-ground" CDSMP program representatives to gain a better understanding of the program's organizational and financial structure, participant tracking and data management systems, and marketing efforts. AoA program staff identified three target states (Illinois, Michigan, and North Carolina), and contacted potential respondents in those states to brief them about the purpose of the calls and to request their participation. In total, 11 respondents participated in six conference calls that included representatives of state grantee agencies and their partner agencies, and representatives from host sites such as coordination directors, master trainers, and workshop leaders. A fuller summary of the interviews is presented in Appendix D.

In all three participating states, the grantee is either the State Department on Aging or the Department of Public Health; in each case, the two agencies work closely together to administer the CDSMP. The primary CDSMP funding in all three states currently comes from ARRA grants, supplemented by Title III-D awards made directly to the Area Agencies on Aging (AAAs). In addition, state-specific sources of support include local funds, in-kind contributions (e.g., staff time), funding from AoA's Sustaining Evidence-Based Health Promotion Programs, and support from organizations such as the CDC and diabetes and arthritis associations. The program is not supported in any way by clients' contributions: there are no client cost-sharing plans or fees to participate in the program. Concern was frequently expressed regarding future funding and, consequently, the sustainability of the program. All states expressed the need for additional funding to expand recruitment, including a focus on gaining provider buy-in to increase referrals to the program. In general, the respondents believe there is a need to reach a critical mass of programs and regions in order to develop a stronger cross-referral system.

The AAAs serve as the primary CDSMP host sites and receive funding directly from the State CDSMP grantees. The amount of funding that an AAA receives is based on the projected number of workshops and participants expected to be served. Some AAAs also use Title III-D funding to support the CDSMP, although they are not required to do so by the grant. 

The host site is responsible for all aspects of program coordination, including leader training, recruitment, materials, and coordinating workshops. In some states, the AAA subcontracts with other organizations to serve as additional coordination sites. For example, in Illinois, Rush University, White Crane Wellness Center, University of Illinois, and the Affordable Assisted Living Coalition are subcontractors to the AAA and are responsible for conducting trainings and/or hosting workshops in their areas. A variety of entities serve as implementation sites, including senior centers, home health agencies, libraries, hospitals, clinics, and some AAAs. Given that the host site is the lowest common entity managing the program, we propose the host site as the evaluation site. AoA and the evaluator will recruit host sites for study participation, possibly through the respective states. The evaluation contractor will then work with the recruited AAAs to set up and run the evaluation at local implementation sites.

Reaching workshop capacity can be relatively easy or extremely difficult, depending on the location of the workshop and the season in which it is held. For example, rural AAA catchment areas tend to cover wider geographic areas than urban catchment areas; it is therefore more difficult to recruit participants, as well as workshop leaders, because they have to travel longer distances to reach an implementation site. Compounding the travel difficulties of older adults and persons with disabilities is the fact that services such as transportation are often limited in rural areas. It is also difficult to recruit and sustain a full class during the winter months, when many older adults and persons with disabilities find it difficult to travel; furthermore, sites described many of their older adults as "snow birds" who travel south for the winter.

These findings have direct implications for the evaluation design and should be taken into consideration by the evaluation contractor when planning the study's sampling strategy. To maximize the sample size, oversampling from urban areas or omitting extremely rural catchment areas (e.g., frontier catchment areas) from the sampling frame should be considered. Planning around the winter months should also be taken into consideration, although this will be a challenge since the proposed evaluation plan suggests a 6-month treatment/control group study, thus encompassing all 12 months of the year. If the time frame allows, one suggestion would be to include only participants who do not anticipate traveling south during the winter, and to start the study in the early spring so that the control group would be starting in the fall, before the harshest winter months prohibit safe travel. Another method for increasing sample size is to expand program recruitment, both directly (through face-to-face presentations, the distribution of flyers and brochures, and television and radio advertisements) and indirectly (by obtaining the buy-in of providers as a way to promote referral to the program). As previously discussed, all states included in the conference calls reported the need to expand their recruitment efforts, but noted that additional funding would be necessary to do so. To ensure an adequate sample size, AoA should consider funding additional recruitment efforts at selected study sites.

Enrollees have to register for the workshops, but there is no application process or eligibility screening for enrolling in the CDSMP; all potential participants are eligible to enroll. At some host sites, when participants call to register, they are given detailed information about the program, including the expectation of their attendance at six workshop sessions. This process allows potential enrollees to self-screen in or out of the program before registering or attending any of the sessions. Interviewees described a higher participant retention rate in catchment areas where this process has been implemented. However, the process of informing potential enrollees about the program has not been implemented consistently, even within host sites.

This finding also has implications for the evaluation design. In order to assign participants to treatment or control groups, there has to be a point, prior to attending the first session, at which all potential participants contact the host (or implementation) site. It is feasible that at study sites, all potential enrollees would be processed through the host site (regardless of their first point of contact). To facilitate this, it may be necessary for AoA to provide additional funding so that study sites can conduct a systematic eligibility screening for the purpose of recruiting participants and assigning them to treatment or control groups. We propose developing a "Participant Tracking System" (PTS) for the host sites to facilitate, among other things, uniform enrollment across sites, randomization, and tracking participation of enrollees and non-enrollees. PTS is described in detail in Chapter 5.
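
As a rough illustration of the kind of record such a system might keep (the actual PTS specification appears in Chapter 5), a minimal sketch follows; every field name here is an assumption.

```python
# Minimal Participant Tracking System record sketch; field names are
# assumed, not drawn from the Chapter 5 specification.
from dataclasses import dataclass, field
from typing import Optional
import random

@dataclass
class TrackedPerson:
    person_id: str
    site_id: str
    consented: bool = False
    arm: Optional[str] = None          # "treatment"/"control" after randomization
    sessions_attended: list = field(default_factory=list)

    def randomize(self, rng: random.Random) -> None:
        if not self.consented:
            raise ValueError("randomize only consented enrollees")
        self.arm = rng.choice(["treatment", "control"])  # 1:1 allocation

    @property
    def completer(self) -> bool:
        # AoA definition: attended at least 4 of the 6 sessions.
        return len(set(self.sessions_attended)) >= 4
```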

The states that participated in the design team interviews follow the AoA mandate that defines a CDSMP "completer" as an enrollee who attends four of the six workshop sessions. Overall, the average program completion rate of 75 percent (range = 72–77 percent) across the three participating states was relatively high, considering that the program serves a chronic disease population. Several reasons were given by participants for dropping out of the program including health-related issues, inclement weather, transportation barriers, and lack of initial understanding that the program is a full six weeks, not a drop-in series of classes. As noted above, systematically administering an eligibility screening prior to enrollment may help to decrease the dropout rate.

On occasion, potential CDSMP enrollees have had to be put on wait lists for the next available workshop. This generally occurs in large urban areas, but for the reasons discussed above, it can also occur in rural areas. There are two primary reasons for generating a wait list. First, on rare occasions, it is difficult to engage a workshop leader. In rural areas, the distance that they have to travel to the implementation site at least once a week for six weeks can be a difficult challenge to overcome. In urban areas, they may be stretched too thin; that is, there are not enough leaders for the number of workshops necessary to serve the number of enrollees. Second, the number of clients who enroll at one time may be too many for one workshop, but not enough for a second workshop to begin concurrently. As noted previously, filling classes can be very easy or extremely difficult, and getting participants to stay committed is also a challenge. At least one state reported that the wait for program participation can be 6 months or longer.

This finding is very useful for the evaluation design since we have proposed that the design include a 6-month wait-list control group. Most of the host sites in the states that participated in the design team interviews have found it necessary to utilize a wait list on occasion, and most respondents agreed that 6 months would not be an unreasonable amount of time for potential enrollees to wait for the next available program. However, one respondent was concerned that given the chronic health condition of CDSMP enrollees, it is likely that a number of participants would be lost over the 6-month waiting period.

In all states, participants are tracked for attendance, and leaders are monitored for program fidelity. Host sites send enrollment figures and attendance records to the state so that the number of participant completions and the number of workshops held can be tracked. During the first workshop session attended, all enrollees are required to fill out the standardized NCOA enrollment form, which requests demographic information including gender, ethnic background, and geographic region, as well as information on enrollees' chronic condition(s). Enrollees also fill out a brief exit form that asks which, if any, of the tools that they obtained during the workshop they expect to use in the future. Enrollment and exit data are collected via paper and pencil, and forwarded (depending on the state) to the state grantee agency, the host site, or the program evaluator, where they are electronically entered into the NCOA database. In all states, hard copy data are retained for a minimum of 1 year, and in one state the data are entered into an Access database and retained by the state. All data are reflected in a national database maintained by a Seattle-based subcontractor of NCOA. The NCOA database is de-identified.

Since workshop leaders already administer data collection forms to all program enrollees, it is possible to request that they also administer the national evaluation survey prior to beginning the first workshop. However, since many workshop leaders are volunteers or serve as leaders as part of their agency positions, asking them to collect these additional data may be too burdensome. Another option is to ask workshop leaders to distribute packets that contain a fact sheet about the study, an informed consent form, and the survey; time might be allotted at the start of the first session for enrollees to complete the pre-implementation survey. A third option is to mail the packets to participants who were screened for eligibility during registration, as discussed above.

None of the states interviewed have an online CDSMP, but they indicated an interest in piloting one if additional funds were made available. The evaluation contractor might consider a small sub-study that tests the validity of the online CDSMP. Provided that enrollees have access to a computer, several of the issues that result in their dropping out of the program would be alleviated through the online program, such as some health-related issues, inclement weather, transportation barriers, and the unwillingness to attend at least four of the six sessions in person.

Lastly, interviews with state and local CDSMP representatives confirmed our gray literature findings on the extent of states' current outreach to Medicaid enrollees. None of the interviewees tracked or knew the proportion of Medicaid beneficiaries participating in CDSMP. However, some of them mentioned current and planned initiatives such as involving Medicaid partners, recruiting Medicaid enrollees, and trying to integrate CDSMP into Medicaid.

C. Data from the CDSMP Technical Assistance Contractor

Recommendations in this evaluation design are informed by conference calls with, and data from, the Technical Assistance (TA) contractor, the National Council on Aging (NCOA). NCOA has a cooperative agreement with AoA to provide TA to CDSMP grantees and other evidence-based programs. NCOA administers a grantee data collection platform via www.salesforce.com and uses www.healthyagingprograms.org as a document-sharing tool. The IMPAQ team and AoA representatives spoke with NCOA to understand the intake process and data collection procedures for CDSMP participants. After an initial call, the IMPAQ team also participated in a webinar on the CDSMP data collection tool. Following the webinar, a list of requests for NCOA data and other supporting documents was prepared; the materials NCOA provided contained valuable information for the evaluation design. This section provides a brief overview of key NCOA data documents and demographic information on states and participants.

During communication with NCOA Representatives, we received and reviewed the following documents:

  • Workshop Forms. These include the attendance log, participant information survey, and the workshop information cover sheet.
  • CDSMP Demographic Report. This spreadsheet includes national data on the size of the program and breakdown of participants by key demographics.
  • CDSMP Grantee Reach. This document details by state the number of workshops hosted, number of participants enrolled, and number of participants who completed the program.
  • Demographic Reports for the 24 original AoA grantees. Similar to the CDSMP Demographic Report, these reports include state level data on the CDSMP program and demographic characteristics of participants.

Several demographic statistics help characterize the national universe of CDSMP participants. For example, 27% of participants are under the age of 60, and the two leading chronic conditions are arthritis and hypertension. The participant population is disproportionately female (78%) and Caucasian (69%), and 17% report Latino ethnicity. Only 2% of all participants have taken the course previously.

C.1 Evaluation Universe

According to recent data from the CDSMP technical assistance provider, in the last 12 months, 47 states provided CDSMP workshops to a little over 30,000 participants, with a workshop completion rate of 74 percent. Of these workshops, 81 percent were for the English version of the generic CDSMP, 8 percent were for the Spanish version of the generic CDSMP, and the remainder were for various disease-specific CDSMPs. Twenty-seven percent of participants were under 60 years of age. AoA is not the sole funder of this program; the CDC also funds CDSMPs. Our team proposes the following inclusion and exclusion criteria:

States: AoA is considering for evaluation only the 24 states that were the original AoA grantees (i.e., the states that received AoA funding in 2006 or 2007). These original grantees conducted about 75 percent of CDSMP workshops nationwide in the past 12 months.

Type of workshop: We recommend limiting the evaluation to the generic CDSMP (both English and Spanish versions), which constitutes about 88 percent of all workshops.

Participants: We recommend limiting the population to persons aged 60 and older, AoA's target population. Approximately 70 percent of all participants fall into this category. Dual eligibles are also included in this category.

Source of funding: We recommend that the evaluation be limited to host sites that received primarily AoA funding. Some of the host sites are primarily funded by sources other than AoA (e.g., CDC). We were unable to obtain data on the proportion of participants whose workshops were funded by non-AoA sources. The technical assistance contractor has indicated that sites with large proportions of younger participants are more likely to be funded by CDC. 

Population subject to the evaluation: After taking into account the inclusion and exclusion restrictions listed above, we estimate that a maximum of 14,500 individuals constitute the evaluation universe.

C.2 Challenges of Recruiting Adequate Sample Size

TA contractor data indicate that the distribution of workshops across host sites is sparse and heavily skewed. Our team obtained host-level enrollment data for eight of the largest grantees (California, Florida, Illinois, Michigan, North Carolina, Oregon, New Jersey, and New York) among the 24 states subject to the evaluation. The data show that half of the participants belong to only 27 of 172 total host sites. Most of the sites are very small: 102 sites are expected to provide fewer than 50 study participants in a random assignment evaluation.2 To recruit an evaluation sample of 3,000, we estimate that the largest 85 sites would have to be included in the evaluation, which is logistically quite challenging. Furthermore, Recovery Act funding will expire in March 2012, making it even more challenging to recruit an adequate number of subjects from a reasonable number of host sites. These observations form the basis of our recommendations that 1) at least the 20 largest sites be selected as evaluation sites to ensure an adequate sample size, and 2) AoA provide additional funds to evaluation sites to maintain and double enrollment in CDSMP. Even though including only the largest sites will weaken the representativeness of the evaluation, this appears to be a necessary choice for a feasible study. The evaluation contractor should explore other options as they may become more feasible due to changes in policy, funding, and implementation.
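
The per-site arithmetic behind the "fewer than 50" figure can be sketched from the footnoted assumptions, read here as sequential filters; the annual enrollment figure below is an illustrative input, not reported data.

```python
# Expected study yield per host site under the footnote 2 assumptions:
# 10% of enrollees ineligible, then 30% of the remainder consenting,
# over a 12-month intake period.
def expected_study_yield(annual_enrollment: float,
                         ineligible_rate: float = 0.10,
                         consent_rate: float = 0.30) -> float:
    eligible = annual_enrollment * (1 - ineligible_rate)
    return eligible * consent_rate

# Under these assumptions, a site would need roughly 185 enrollees per
# year to yield about 50 study participants.
print(expected_study_yield(185))  # ~50
```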

As part of the environmental scan task, the evaluation contractor is strongly recommended to revisit the key assumptions and reassess the feasibility of obtaining an adequate sample size. It may also be necessary to interview some of the sites to get a sense of their projected enrollments during the intake period.

In light of these potential challenges in recruiting evaluation subjects, we are also proposing an alternative design (Propensity Score Matching) which does not require as many evaluation subjects and does not involve random assignment. This alternative design is described in Chapter 5.
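
As a rough sketch of the matching step in such a design (the full alternative is specified in Chapter 5), the fragment below estimates propensity scores with logistic regression and pairs each participant with the nearest-scoring comparison subject; variable names and covariates are illustrative.

```python
# 1:1 nearest-neighbor propensity-score matching sketch (scikit-learn);
# matching is done with replacement for simplicity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_controls(X_treated, X_pool):
    """Return indices into X_pool of matched comparison subjects."""
    X = np.vstack([X_treated, X_pool])
    y = np.concatenate([np.ones(len(X_treated)), np.zeros(len(X_pool))])

    # Propensity = estimated probability of CDSMP participation given
    # the covariates.
    model = LogisticRegression(max_iter=1000).fit(X, y)
    ps_treated = model.predict_proba(X_treated)[:, 1]
    ps_pool = model.predict_proba(X_pool)[:, 1]

    # For each participant, find the comparison with the closest score.
    nn = NearestNeighbors(n_neighbors=1).fit(ps_pool.reshape(-1, 1))
    _, idx = nn.kneighbors(ps_treated.reshape(-1, 1))
    return idx.ravel()
```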

D. Key Findings from Technical Expert Panel (TEP)

On March 30, 2011, the IMPAQ team, in collaboration with AHRQ and AoA, held a Technical Expert Panel (TEP) meeting with three CDSMP experts. Generally, the experts thought that the evaluation design was feasible, and offered suggestions to improve the report, namely to provide greater detail in the approach to key tasks and to offer options for the evaluation contractor. This section will provide an overview of the TEP meeting discussions. The minutes of the TEP meeting are presented in Appendix E.

The TEP made the following suggested revisions to the Design Report:

  • Despite the limitations and difficulty of using Medicaid data, TEP members thought that this should be included as an option in the evaluation. They stated that state governments are particularly interested in the impact of CDSMPs on Medicaid costs (for dual eligibles) and would be willing to comply when possible. There are a number of new Medicaid initiatives funded by the Affordable Care Act that potentially incorporate CDSMPs: Health Homes (Section 2703), Medicaid Incentives for Prevention of Chronic Diseases Program (Section 4108), and State Demonstrations to Integrate Care for Dual Eligible Individuals (Section 2602). It was recommended that, at the very least, the evaluator identify and track dual eligibles. Medicare administrative data on dual eligibles may allow estimation of some components of Medicaid expenditures on dual eligibles. The evaluator should assess any such relationships in order to try to estimate the impact of CDSMP on some Medicaid aspects.
  • The evaluation should expand site selection to reflect the diversity of the CDSM program. All TEP members felt strongly that site selection should include sites that represent the range of experience in settings (e.g., sites from rural areas or frontier states) and sites featuring the Spanish-language version of the program, "Tomando Control de Su Salud." TEP members thought that inclusion of both types of sites was necessary to capture the diversity of the CDSMP.
  • In addition to the recommended control variables, TEP members encouraged the collection of other control variables, such as participant insurance status, receipt of Older Americans Act (OAA) services, and language of instruction for the course.
  • The TEP also recommended that more details be given on exclusion factors for participation in the study, especially if the participant previously took the course (course repeater); was a proxy for a family member or friend with a chronic condition (proxy); or scored low on a cognitive screen for the study.
  • A final TEP recommendation was that the evaluation contractor should consider the inclusion of "class zero," or an introductory session, where participants can learn about the program and the evaluation contractor can collect baseline data.

The TEP also raised some high-level evaluation issues. One TEP member suggested expanding the study beyond the original 24 grantees to increase sample size potential. In addition, AoA and the TEP members thought it was necessary to detail an alternative plan for when ARRA funding ends. The design contract has been revised to include an alternative plan in the event that sites are unable to provide the number of participants necessary to derive statistically significant results.

All of the above suggestions were considered and integrated into revisions of the design report. In particular, to address TEP comments, we expanded the section on the analysis of participant characteristics in Chapter 5. Within each chapter, revisions have been made to address key points raised by the TEP.

E. Further Exploration Needed

Although much information was obtained from the interviews with CDSMP representatives and the review of the published, peer-reviewed literature, as described above, questions remain for the evaluation design that may be best answered through an environmental scan. For example, in the interval between the current design study and the national CDSMP evaluation, policy, programmatic, or funding changes may influence how CDSMPs are administered or implemented. Another important task is to reassess the feasibility of the design and make any changes if necessary; this step is critical to ensuring a viable design in light of uncertainties in future CDSMP funding. Furthermore, health care reform has initiated potentially sweeping changes, some of which place evidence-based interventions such as the CDSMP at their center. Discussions with policymakers and stakeholders may lead to revised or added research questions. The goal is to fully utilize this opportunity to rigorously evaluate a key program and to ensure participation of the various stakeholders who are using CDSMPs as a way to change how health care is delivered. Particular attention must be given to related policies being implemented concurrently. The evaluation contractor needs to understand and incorporate any such changes into the final evaluation design.

Other areas for which additional information could be obtained from an environmental scan include:

  • Reassessing the feasibility of the design based on fresh CDSMP program data and the new policy environment.
  • Identifying Medicare outcomes that could serve as proxies for Medicaid outcomes.
  • Assessing the extent of states' outreach to Medicaid enrollees; identifying states that have robust CDSMP outreach to Medicaid beneficiaries; and, among these, identifying and recruiting a few states interested in sharing Medicaid data.
  • Conducting focus groups to facilitate decision making on design-related issues, such as Medicaid data acquisition and the feasibility of implementing the RCT; these groups will also provide valuable input for the evaluation design.
  • Identifying other evidence-based educational initiatives that could introduce bias or contamination to the site selection process (e.g., the CMS-funded Quality Improvement Organization pilot study to test the feasibility of adding a CDSMP benefit to Medicare, and the Texas A&M/Stanford study to replicate the original Lorig studies).
  • Identifying instruments with good psychometric properties, appropriate for use with the elderly, to capture self-reported health status, health behavior, and self-efficacy.
  • Identifying sites with statewide enrollment systems to understand these CDSMP programs and their data or enrollment systems.
  • Identifying the number of sites utilizing a "class zero," or an information class about the program.
  • Assessing the feasibility of sites performing a cognitive screen (e.g., the Mini-Mental State Examination) at the time of enrollment in the course. While sites do not currently perform this screen, the evaluation contractor should consider excluding participants with cognitive impairment.

Another area of activity is helping the CDSMP grantees that are participating in the evaluation develop a uniform application system. Activities include:

  • Working with AoA, the sites, and possibly the TA contractor to design a uniform registration and baseline data collection system at each evaluation site; this system would be supported by a Participant Tracking System (Chapter 5).
  • Reviewing and understanding current practices at each evaluation site.
  • Developing a system into which sites with potentially different current practices can be integrated.

E.1 Measuring Program Differences

Given the diversity of the CDSM program, we recommend that the evaluation contractor consider the factors below, which are deemed to influence program success. This section identifies four domains that contribute to the success of the program and potentially affect participant outcomes. After a review of key organizational and resource factors, we suggest additional staffing and participation factors that arose during site interviews and discussions with the TA contractor.

The evaluation contractor should capture the following four domains at the host site level through a web-based survey:

  • Organization and Resource Factors:
    1. Organizational structure: Who is the individual or agency responsible for running the program? Who holds the license?
    2. Financing structure and strategy: What funding and additional resources are available? Is there funding instability?
  • Staffing and Participation Factors:
    1. Experience of facilitator or leader.
    2. Number of course completers and repeaters.

Currently, research is underway to explore how program factors influence CDSMP participant outcomes (Lorig, personal communication, Sept. 7, 2010). Key domains of interest include organizational structures and financial structures and strategies. The organizational or reporting structure is an important consideration, given the variability in the agency or individual running the CDSMP. For example, the organization that supervises and administers the program ranges from small private organizations to large organizations with a long history with CDSMP programs. The organizational structure is also influenced by who holds the license and is legally responsible for the course. It is hypothesized that program success may be influenced by the type of individual or agency administering the program. Second, available resources, financial and otherwise, are likely to create differences across programs. A single site may receive funding from four or five organizations, each with a potentially different interest in the program. It is expected that sites with greater funding will produce higher quality courses and attract more experienced facilitators/leaders. An indicator of funding instability should also be included in the web survey to capture programs that will lose their funding in the near future. We expect that these organizational and resource factors create disparities across sites and influence participant compliance and program success.

Staffing and participation are the second group of domains to include in the web-based survey. We expect that facilitators or leaders with stronger training, longer experience, and backgrounds similar to participants' will produce programs that have a greater and longer-lasting influence on individual outcomes. Variables of interest include years of experience, number of previous CDSM programs taught, educational background, and fluency in other languages. We also recommend that the web survey collect information about course completers and repeaters. The number of course completers serves as a proxy for site strength, and the number of course repeaters measures whether participants at each site have previously taken a CDSMP course. We also suggest that the evaluation contractor analyze the concentration of course completers and repeaters across sites to better understand high-performing sites. We expect that individual-level experience will be influenced by the domains described in this section. Given the changing policy environment and dynamic nature of the program, we suggest that the evaluation contractor consider additional program factors at the time of the evaluation.


1. Medicare and Medicaid managed care enrollees will need to be excluded from the utilization and cost analyses due to data limitations.

2. Key assumptions are: (1) the intake period is 12 months; (2) 30% of workshop participants will agree to participate in the study and thus accept a 50% chance of being denied the workshop; (3) 10% of workshop participants will not be eligible to participate in the study (repeat customers, dementia, etc.); and (4) all of the host sites will be willing to participate in the evaluation.

