- Evaluation Objectives
- Data Sources
- Demonstration Projects by Grant Category
- Evaluation Questions
- The Team
- Learn More
In August 2010, the Agency for Healthcare Research and Quality (AHRQ), in partnership with the Centers for Medicare & Medicaid Services (CMS), awarded a contract to Mathematica Policy Research and its partners, the Urban Institute and AcademyHealth, to evaluate this demonstration grant program—the largest evaluation of child health care quality currently underway in the United States. Together, the grant program and its evaluation are among the Nation's most important efforts to improve the quality of health care for children enrolled in Medicaid and the Children's Health Insurance Program (CHIP).
The national evaluation team is charged with conducting a rigorous evaluation of the CHIPRA Quality Demonstration Grant Program to determine the impact of grantee activities on the quality of children's health care. (Key members of the team are listed at the end of this document.)
Evaluation Objectives
As described elsewhere, 18 States are implementing 51 projects in 5 broad categories (see Table 1 below). The national evaluation team will gather many kinds of qualitative and quantitative data about these projects and about the collaboration among grantees and their partners. We will analyze this information to conduct a multi-level evaluation that aims to:
- Assess the implementation of single projects independently of all others, focusing on whether the project's goals and objectives were achieved.
- Combine information across projects within a single grant category to identify effective strategies and successful outcomes.
- Examine how specific States improved the quality of children's health care by implementing multiple projects and describe how the activities in one grant category supported or enhanced projects in other categories.
- Determine the extent to which collaborations among States contributed to the success of the demonstration activities.
- Assess the overall benefits of the demonstration program by comparing selected outcomes of the participating States with those of non-participating States.
- Examine the contributions of demonstration activities to improvements in quality of care with respect to four special interest areas: oral health, obesity, behavioral health, and Early and Periodic Screening, Diagnostic, and Treatment (EPSDT) programs.
- Provide insights into the successes and limitations of the program to inform future Federal demonstration efforts.
Data Sources
Sources of quantitative data include administrative and claims data, original survey data from providers, and data from the State or the grantee-specific evaluation teams. Qualitative data sources include program documents and reports, key informant interviews with program staff and stakeholders, and information gleaned from focus groups with providers and families.
Table 1. Demonstration Projects by Grant Category

| Grant Category | Total Projects in Category |
|---|---|
| Using Quality Measures to Improve Care Quality | 10 |
| Promoting Use of Health IT to Enhance Quality of Care | 13 |
| Evaluating a Provider-Based Model | 17 |
| Testing a Model EHR Format for Children | 2 |
| Testing State-Specified Approaches to Improving Quality | 9 |
Source: Final operational plans, evaluation addenda, and evaluation planning calls with grantees.
Note: Health IT= health information technology; EHR= electronic health record.
Evaluation Questions
AHRQ and CMS have identified 20 broad research questions and well over 200 detailed ones that the national evaluation might address in evaluating the demonstration projects. Examples of the broad questions include the following:
- How did stakeholders use the initial set of core quality measures for children, and what was their impact on the delivery system?
- What health information technology (IT) or health IT enhancements were effective in improving quality of care or reducing costs?
- How were the provider-based models implemented by the demonstration States, and did they change children's health care quality?
Findings from the national evaluation will be organized, published, and disseminated in ways that address the needs of stakeholders, including Congress, AHRQ, CMS, States, the provider community, and family organizations. The national evaluation team will disseminate results of analyses through replication guides for other States, Web site postings of Medicaid and CHIP Promising Practices and AHRQ Innovations, conference presentations, manuscripts in professional journals, issue briefs for consumers and policymakers, and a comprehensive final report—most of which will be housed on this Web page.
The Team
Listed below are key members of the national evaluation team:
Agency for Healthcare Research and Quality (AHRQ)
Cindy Brach, M.P.P.
Stacy Farr, M.P.H.
Centers for Medicare & Medicaid Services (CMS)
Karen Llanos, M.B.A.
Mathematica Policy Research, Inc.
Henry Ireys, Ph.D.
Leslie Foster, M.P.A.
Anna Christensen, Ph.D.
Grace Ferry, M.P.H.
Catherine McLaughlin, Ph.D.
Brenda Natzke, M.P.P.
Chris Trenholm, Ph.D.
The Urban Institute
Kelly Devers, Ph.D.
Rachel Burton, M.P.H., M.P.P.
Ian Hill, M.P.A., M.S.W.
Genevieve Kenney, Ph.D.
Stacey McMorrow, Ph.D.
AcademyHealth
Lisa Simpson, M.B., B.Ch., M.P.H., F.A.A.P.
Learn More
Current findings, reports, and other resources from the national evaluation are available on this Web site.
For additional information about the national evaluation:
- Reference the Evaluation Design Plan (PDF file; 152 KB).
- Contact: CHIPRADemoEval@ahrq.hhs.gov.
Please note: This Web site uses the term "national evaluation" to distinguish this evaluation of the entire demonstration program from evaluations commissioned or undertaken by grantees. The word "national" should not be interpreted to mean that findings are representative of the United States as a whole.