
Monitoring and Evaluating Medicaid Fee-for-Service Care Management Programs

Chapter 2. Getting Started (continued)

Action Step 4: Determine How Much Funding Is Available

Funding plays a critical role in defining how much a State can invest in its evaluation efforts. Some States elect not to budget a specific amount for CM evaluation activities. Rather, their evaluation activities may be subsumed within a variety of contracts and general administration budget line items or may not be budgeted at all. Other States build evaluation costs into their vendor contracts. Regardless of the model, a comprehensive evaluation will require resources, both human and financial, and should be planned for accordingly.

Human Resources

First, you should ensure that you have qualified staff available to focus on the evaluation. The number and type of staff members needed on an evaluation team will vary depending on the scope of the evaluation and whether you decide to conduct parts of the evaluation in-house, contract it out, build components into your CM vendor contract, or use some mixture of these arrangements. In fact, many States do include a mix of in-house and outsourced components of their evaluation.

In-House Resources

At minimum, you are likely to need a senior-level, in-house staffer who can devote a portion of his or her time to oversight of the evaluation project. Additional staff resources are generally preferable. Senior staff and program managers should play an active role in identifying the evaluation questions, determining which data and methods to use, and interpreting the findings, regardless of the evaluation arrangement you choose. Also, as a general rule, program managers should plan to delegate the data analysis and the oversight of many of the analytic details to qualified internal or external resources.

If any of the evaluation activities are conducted in-house, you will probably also need a program analyst or policy analyst to manage day-to-day evaluation tasks, a clinician to assist with the design of measures and interpretation of findings, and at least one data programmer/statistician to perform data analyses.

All members of the CM program staff, whether directly involved in the evaluation or not, should expect to devote some portion of their time to the evaluation and performance monitoring efforts. For example, the staff person who oversees the CM hotline may not be directly involved in the program evaluation but will likely contribute in some manner, e.g., by pulling hotline call records and/or being available for a key informant interview.

External Resources

If you use an outside evaluator (e.g., an actuarial firm, research group, or local university), it is best to involve the evaluator as early as possible in the program and evaluation design. As an example, Texas, which uses an actuarial firm for most of its CM evaluation activities, found that involving the actuary at the point of program design and implementation was critical.13

States that have savings or performance guarantees may find that their evaluations are subjected to a high level of scrutiny, and outside help may be necessary to conduct a rigorous evaluation. Outside review of the proposed evaluation approach can be helpful even if parts of the evaluation are to be conducted in-house. In addition, a small amount of technical assistance from an expert can significantly enhance staff capabilities.

Suggestion for Vendor-Run Programs

If you are contracting with a CM vendor, it is advisable to clarify up front your expectations of the vendor regarding ongoing performance monitoring and evaluation activities. Which portion of the evaluation is each party responsible for conducting? Remember to also build in some flexibility with your vendor for ad hoc data requests to meet any unanticipated evaluation needs.

You will also need to determine how vendor-reported data and evaluation findings will be validated (including verification of the credibility of the methods used to generate findings) and how much the validation activity will cost. Addressing these questions early will help both you and the vendor allocate the appropriate resources to meet your evaluation goals.

Lessons from the Field

In Pennsylvania, the State contracts out portions of its evaluation while also retaining some evaluation responsibilities in-house. The State contracts with an actuarial firm to conduct a cost-effectiveness analysis. Pennsylvania also uses a health policy analyst to do in-house quality and financial performance evaluation. In its CM vendor contract, the State set forth specific evaluation expectations and an evaluation timeframe. When the State prepared an independent program assessment for its 1915(b) waiver renewal, it used evaluation expertise from the State's Comptroller's office.22

In Vermont, the legislature passed the "Blueprint for Health" Chronic Care Initiative in 2005. This initiative includes Medicaid and all other Vermont health care payers. The "Blueprint for Health" is the State's plan for implementing a chronic care infrastructure that includes the prevention of chronic conditions and chronic CM programs. "Blueprint for Health" applies an integrated approach to patient self-management, community development, health care system and professional practice change, and information technology initiatives.23

Vermont issued separate requests for proposals (RFPs) for the evaluation, monitoring, and provision of services for its CM program. This approach divides up these functions to ensure that the specific goals of each part of CM are met, underscoring the importance of evaluation and performance monitoring. The RFPs also emphasize transparency and collaboration as core components of CM evaluation and implementation.

Other Resources

In addition to human resource costs, you will also incur costs associated with the execution of the evaluation and the dissemination of findings. These costs will be a function of the data sources and research methods you use. Costs will also depend on how much you choose to invest in engaging stakeholders and disseminating results.

Your investment in data resources will depend on the availability and quality of your data and the complexity of your methodologies. For example, if you already have high quality claims data available, your costs may be limited to the expenses of scrubbing and formatting the data. If your data are less reliable, you may need to make greater investments in your data systems to support the analyses. You may also find that acquiring supplemental data through medical record reviews or primary data collection (e.g., surveys or focus groups) can be extremely informative but is relatively costly and labor intensive. However, carefully selecting random samples of adequate size may help you maximize your investment.
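Where the report recommends "random samples of adequate size," the standard sample-size formula for estimating a proportion is one common starting point. The sketch below is illustrative only; the function name and default values are assumptions for this example, not figures from the report:

```python
import math

def sample_size_for_proportion(p=0.5, margin=0.05, z=1.96, population=None):
    """Minimum random sample size to estimate a proportion within
    +/- margin at the confidence level implied by z (1.96 = 95%).
    p=0.5 is the conservative, worst-case assumption."""
    n = (z ** 2) * p * (1 - p) / margin ** 2
    if population is not None:
        # Finite population correction: smaller enrollee pools need fewer charts
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

# Worst-case proportion, +/-5% margin, 95% confidence:
print(sample_size_for_proportion())                  # 385
# Same precision drawn from a pool of 2,000 eligible charts:
print(sample_size_for_proportion(population=2000))   # 323
```

In practice, a State would refine these inputs (expected measure rates, desired precision, stratification by practice or condition) with its evaluator before committing chart-review resources.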

Innovative Data Collection Strategies

Several States have expanded their portfolio of measures through innovative data collection strategies and strategic partnerships. In North Carolina, the Medicaid program contracted with Area Health Education Centers (AHECs) to collect medical record data at lower cost.21 AHECs are located in nine regions of North Carolina and provide educational programs and information services to the health care workforce.24 A team of six auditors on staff with the AHECs conducts independent medical record reviews on random, practice-specific samples. The Medicaid program reimburses the AHECs on a per-chart basis ($20 per chart plus a 5 percent administrative fee). In 2006, the State spent almost $400,000 to review approximately 9,000 charts for patients with asthma and 8,500 charts for patients with diabetes.
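As a quick back-of-the-envelope check of the per-chart reimbursement figures above (a hypothetical helper, not an official State calculation), the sketch below assumes review cost scales linearly with chart volume:

```python
def chart_review_cost(charts, per_chart_fee=20.00, admin_rate=0.05):
    """Total review cost: per-chart fee plus a percentage administrative fee."""
    return charts * per_chart_fee * (1 + admin_rate)

# North Carolina's reported 2006 volumes (asthma + diabetes charts):
total = chart_review_cost(9_000 + 8_500)
print(f"${total:,.0f}")  # $367,500 -- broadly consistent with "almost $400,000"
```

This kind of simple cost model can help a State project chart-review budgets when scaling samples up or down.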

In Indiana, the State partnered with the Regenstrief Institute to access electronic medical records (EMRs) to enhance its capabilities for clinical measurement.25 In an urban, multi-site group practice that serves a large number of patients with Medicaid coverage, the presence of an EMR system gave the team the opportunity to use both clinical data and Medicaid claims data for program support and evaluation. EMR data can help in analyses of baseline characteristics of potential CM participants and in sample size calculations for a controlled trial. Clinical data can also help validate population selection algorithms that are based on Medicaid administrative data, and they can provide information on some variables (such as HbA1C levels) that are not recorded in Medicaid claims.

Washington Medicaid uses a client services database maintained by the Research and Data Analysis Division at the Washington Department of Social and Health Services to do ad hoc analyses and drill down on particular evaluation questions.12 The database compiles Medicaid eligibility and claims data with mental health, long-term care, and developmental disabilities data to give a more complete picture of the enrollee experience. Washington, like a number of other States, chose to modify the Consumer Assessment of Healthcare Providers and Systems (CAHPS™) survey, rather than creating a new one, to collect client satisfaction data for its CM population.

Some States have partnered with their External Quality Review Organization (EQRO) or Quality Improvement Organization (QIO) to collect hospital records. Virginia is considering ways to offer incentives to providers to encourage their involvement in clinical data collection.a

a Note: In any initiative using medical record data for evaluation purposes, extreme care must be taken to remove individual patient identifiers.

Lessons from the Field

The experience of Texas Medicaid in refining its data system exemplifies the need to factor in expenses that can result from data collection and evaluation. Texas staff spent many hours creating a system to generate data and exchange information between the State's system and the contractor's system. Because Texas Medicaid was undertaking major information technology (IT) system upgrades at the same time it was designing its CM program, staff found that they needed a senior-level program champion on the operations/IT staff to ensure that specific CM data and evaluation needs were prioritized. CM staff met with the IT team (including the champion and at least one IT programming specialist) weekly as they developed the data system. They found that long-term continuity in core staff involvement was critical to the success of the program.


Action Step 5: Select Evaluators

A good program evaluator should have strong critical thinking and analytic skills. The evaluator should have a thorough understanding of how your CM program operates and should be familiar with the attributes and limitations of your data. The evaluator(s) for your CM program may be independent of your program or come from within your organization. Some States use a mix of outside and in-house evaluators.

Deciding who will conduct your evaluation depends on several factors, including whether you have a vendor-run CM program, what your evaluation questions are, and what types of analytic methods and data sources you use. For example, if your evaluation questions are related to the economic impact of your CM program, you may need a different type of evaluator than if your questions focus primarily on clinical impact.

States that have invested in building up in-house research and evaluation units may opt to conduct some portion of their evaluations in-house. Those that do not have staff focused on evaluation will typically look outside for evaluation expertise and use in-house staff primarily for ongoing program monitoring. States that have a data warehousing arrangement with a local university or other research partner may also opt to outsource portions of their evaluation. As a general rule, actuarial evaluations require States to solicit outside expertise, which likely will not come from a university.

You should carefully consider your organizational capacity when making decisions about who will conduct your CM evaluation. Resource availability will certainly influence that decision. In-house evaluation efforts, like outsourced efforts, have strengths and weaknesses. For example:

  • In-house evaluation activities provide an opportunity for building in-house expertise and capacity. However, in-house evaluation efforts, particularly if conducted by your CM vendor, may be criticized for a lack of objectivity. If you conduct portions of your evaluation in-house, you may want to consider having an external advisory committee made up of recognized program and evaluation experts to provide input on evaluation questions, research design, and interpretation of findings.
  • An outsourced evaluation is generally viewed as more objective than an evaluation conducted in-house. However, independent evaluators may not be entirely free of conflicts of interest. No matter how much detail your contractor shares with you, there may still be a "black-box" element, which can be significant. Plan to conduct some form of independent validation and verification of your contractor's findings, which can be difficult unless there is transparency about the methods used.

It is also important to note that university partners or other collaborating organizations may not be perceived as completely independent if they are affiliated with your Medicaid program. If your local university or other organizational affiliate has been involved at any stage of the CM program development and management, there will be a natural allegiance and a desire to see the CM program proven effective. This may diminish the perceived objectivity of your evaluation. Addressing this perceived conflict up front will promote transparency and help others interpret the findings clearly.26

Lessons from the Field

As Washington's CM program evolved over time, the State used four different evaluators: in-house staff, CM vendors, an external clinical evaluator, and an actuarial firm. Washington has continually used its in-house staff to do some performance monitoring and evaluation activities and its CM vendors to provide client-reported health status, access, and satisfaction data. The State used the University of Washington to conduct a clinical evaluation of the first year of the CM program and contracted with its EQRO to do the clinical evaluation thereafter. The State uses an actuarial company to examine CM cost savings. In the future, Washington's EQRO will synthesize findings from all the evaluators.12

The involvement of several evaluators has allowed Washington Medicaid to take advantage of in-house evaluation capacity while also leveraging specific outside expertise from several contractors. Each evaluator provided a different perspective on the effectiveness of Washington's CM program, which in some cases provided an opportunity for comparison and validation. While the use of multiple evaluators has led to variations in methodologies, definitions, and measures, the State has found that the greatest challenge of having multiple evaluators has been presenting the findings.27 In particular, juxtaposing the vendor-reported results with findings from other evaluators has not always provided a clear story.

Stakeholder Involvement

States should also include key stakeholders (such as consumers, advocates, providers, legislators, vendors, and other interested parties) throughout the evaluation process, from the design stages to the interpretation of the findings. In particular, it will be important to gain stakeholder involvement in determining the appropriate outcomes to measure. If you design and execute your evaluations through a collaborative and transparent process, you will find that the evaluation gains credibility. In addition, States may find that they can use the evaluation as a tool for building stakeholder involvement in the CM program. Increased involvement, particularly within the provider community, may strengthen the overall CM program. It is important to note, however, that actively involving stakeholders may require a significant investment of time and energy on the part of program staff.

Lessons from the Field

North Carolina involved its provider community in every aspect of its CM program. The State found that the time spent creating a culture of cooperation has had a tremendous impact as the State builds the program. Providers, in consultation with the State, choose which performance measures are most appropriate for each condition. Program administrators find that their coordination with providers has made the program and its evaluation efforts more sustainable over time because providers are invested in the goals of the program and the outcomes of the evaluation. They also recognize that once a structure has been put in place to communicate with providers, the program is able to more easily troubleshoot programmatic and evaluation issues.21

In Indiana, a rural health center emerged as a "pathfinder" or vanguard practice. Through close collaboration with this health center, the State and its evaluator, the Regenstrief Institute, identified certain characteristics that varied between practices. The partnership has helped the State drill down in specific practice areas and test evaluation strategies.25

Some States with vendor-run programs have used their vendors to establish provider advisory boards. Texas, for example, contractually requires its CM vendor to engage providers on an ongoing basis through quarterly meetings and regular communications. The provider advisory board has been extremely active throughout the history of the Texas CM program. The board plays an ongoing role in helping the State identify performance measures and recommend program modifications or policy changes.13

Virginia has designed a new CM program that is patient-centered. The State, with its CM vendor, has carefully created a process for engaging enrollees, family members, and personal representatives. Consumer engagement will continue to be a priority as Virginia develops its evaluation plan.20


Current as of November 2007
Internet Citation: Monitoring and Evaluating Medicaid Fee-for-Service Care Management Programs. November 2007. Agency for Healthcare Research and Quality, Rockville, MD.