Agency for Healthcare Research and Quality
U.S. Department of Health and Human Services
540 Gaither Road
Rockville, MD 20850
Mathematica Policy Research, Princeton, NJ
Project Director: Deborah Peikes
Principal Investigators: Deborah Peikes and Erin Fries Taylor
Deborah Peikes, Ph.D., M.P.A., Mathematica Policy Research
Erin Fries Taylor, Ph.D., M.P.P., Mathematica Policy Research
Janice Genevro, Ph.D., Agency for Healthcare Research and Quality
David Meyers, M.D., Agency for Healthcare Research and Quality
None of the authors has any affiliations or financial involvement that conflicts with the material presented in this guide.
The authors gratefully acknowledge the helpful comments on earlier drafts provided by Drs. Eric Gertner, Lehigh Valley Health Network; Michael Harrison, AHRQ; Malaika Stoll, Sutter Health; and Randall Brown and Jesse Crosson, Mathematica Policy Research. We also thank Cindy George and Jennifer Baskwell of Mathematica for editing and producing the document.
This project was funded under contract HHSA290200900019I from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The opinions expressed in this document are those of the authors and do not reflect the official position of AHRQ or the U.S. Department of Health and Human Services.
AHRQ Publication No. 14-0069-EF
Quick Start to This Evaluation Guide
Goals. Effective primary care can improve health and cost outcomes and the experience of patients, clinicians, and staff, and evaluations can help determine how best to improve primary care to achieve these goals. This Evaluation Guide provides practical advice for designing real-world evaluations of interventions such as the patient-centered medical home (PCMH) and other models to improve primary care delivery.
Target audience. This Guide is designed for evaluators affiliated with delivery systems, employers, practice-based research networks, local or regional insurers, and others who want to test a new intervention in a relatively small number of primary care practices, and who have limited resources to evaluate the intervention.
Summary. This Guide presents some practical steps for designing an evaluation of a primary care intervention in a small number of practices to assess the implementation of a new model of care and to provide information that can be used to guide possible refinements to improve implementation and outcomes. The Guide offers options to address some of the challenges that evaluators of small-scale projects face, as well as insights for evaluators of larger projects. Sections I through V of this Guide answer the questions posed below. A resource collection in Section VI includes many AHRQ-sponsored resources as well as other tools and resources to help with designing and conducting an evaluation. Several appendices include additional technical details related to estimating quantitative effects.
- Do I need an evaluation? Not every intervention needs to be evaluated. Interventions that are minor or inexpensive, have a solid evidence base, or are part of quality improvement efforts may not warrant an evaluation. But many interventions would benefit from study. To decide whether to conduct an evaluation, it's important to identify the specific decisions the evaluation is expected to inform and to consider the cost of carrying out the evaluation. An evaluation is useful for interventions that are substantial and expensive and that lack a solid evidence base. It can answer key questions about whether and how an intervention affected the ways practices deliver care and how changes in care delivery in turn affected outcomes. Feedback on implementation of the model and early indicators of success can help refine the intervention. Evaluation findings can also help guide rollout to other practices. One key question to consider: Can the evaluation that you have the resources to conduct generate reliable and valid findings? Biased estimates of program impacts would mislead stakeholders and, we contend, could be worse than having no results at all. This Guide has information to help you determine whether an evaluation is needed and whether it is the right choice given your resources and circumstances.
- What do I need for an evaluation? Understanding the resources needed to launch an intervention and conduct an evaluation is essential. Resources needed for evaluations include (1) leadership buy-in and support; (2) data; (3) evaluation skills; and (4) time for the evaluators, and for the practice clinicians and staff who will provide data, to perform their roles. It's important to be clear-sighted about the cost of conducting a well-designed evaluation and to consider these costs in relation to the nature, scope, and cost of the intervention.
- How do I plan an evaluation? It's best to design the evaluation before the intervention begins, to ensure the evaluation provides the highest quality information possible. Start by determining your purpose and audience so you can identify the right research questions and design your evaluation accordingly. Next, take an inventory of resources available for the evaluation and align your expectations about what questions the evaluation can answer with these resources. Then describe the underlying logic, or theory of change, for the intervention. You should describe why you expect the intervention to improve the outcomes of interest and the steps that need to occur before outcomes would be expected to improve. This logic model will guide what you need to measure and when, though you should remain open to unexpected information as well as consequences that were unintended by program designers. The logic model will also help you tailor the scope and design of your evaluation to the available resources.
- How do I conduct an evaluation, and what questions will it answer? The next step is to design a study of the intervention's implementation and—if you can include enough practices to potentially detect statistically significant changes in outcomes—a study of its impacts. Evaluations of interventions tested in a small number of practices typically can't produce reliable estimates of effects on cost and quality, despite stakeholders' interest in these outcomes. In such cases, you can use qualitative analysis methods to understand barriers and facilitators to implementing the model and use quantitative data to measure interim outcomes, such as changes in care processes and patient experience, that can help identify areas for refinement and the potential to improve outcomes.
- How can I use the findings? Findings from implementation evaluations can indicate whether it is feasible for practices to implement the intervention and ways to improve the intervention. Integrating the implementation and impact findings (if you can conduct an impact evaluation) can (1) provide a more sophisticated understanding about the effects of the model being tested; (2) identify types of patients, practices, and settings that may benefit the most; and (3) guide decisions about refinement and spread.
- What resources are available to help me? The resource collection in this Guide contains resources and tools that you can use to develop a logic model, select implementation and outcome measures, design and conduct analyses, and synthesize implementation and impact findings.