A final report of modest length must be selective in reporting the key findings and activities of a 61-month-long evaluation of a complex demonstration grant program. In this chapter, we have elected to synthesize the findings and insights presented in the products developed by the National Evaluation Team using, for the most part, the original grant categories. We encourage readers to review specific products for additional findings and nuances that we have not highlighted here. These products can be found in peer-reviewed journals and on the national evaluation’s Web site hosted by AHRQ (www.ahrq.gov/chipra/demoeval/).
Under Category A, 10 States were funded to collect, report, and assess the use of CMS’ Core Set of Children’s Health Care Quality Measures for Medicaid and CHIP (Child Core Set), as well as supplemental pediatric quality measures.4 Their objectives were to identify barriers to the collection and reporting of these measures and to build capacity for reporting and using them to improve the quality of care for children.
The Child Core Set was originally developed by AHRQ and CMS with substantial input from key stakeholders (including the organizations that developed and maintain measures included in the set). CMS released the initial technical specifications for reporting the Child Core Set in February 2011. The measures address a range of high-priority topics in child and adolescent health, such as access to primary care, preventive care (including vaccinations and developmental screenings), maternal and perinatal health (including prenatal care and low birthweight rate), behavioral health (including followup after hospitalization for mental illness), care of acute and chronic conditions (including medication management for asthma), oral health care (including dental visits for prevention and treatment), and patient/family experience with care.
State Medicaid/CHIP agencies began voluntarily reporting some State-level measures to CMS in 2011 for the Federal fiscal year (FFY) 2010 reporting period. CMS subsequently updated the Child Core Set. Specifically, CMS changed data sources for three measures for FFY 2012, retired one measure and added three measures for FFY 2013, and retired three measures for FFY 2014 reporting. The States’ performance on these measures can be found in the Secretary’s Annual Report on the Quality of Care for Children in Medicaid and CHIP, usually released in October of each year.5
In addition to State-level reporting of the Child Core Set to CMS, there is the potential to use these measures for reporting by health care organizations, such as child-serving practices, health
systems, and managed care organizations (MCOs). Beyond simply reporting performance on the Child Core Set, States, MCOs, health systems, and practices can use the measures in quality improvement (QI) initiatives. Focusing on these two general activities of Category A (reporting measures and using them for QI), States produced the following findings, which we discuss below in more detail:
- States encountered a variety of barriers to reporting the Child Core Set to CMS and developed diverse methods to address the barriers.
- States applied a range of strategies for using quality measures as part of broader QI initiatives.
- Practices encountered numerous challenges to reporting quality measures (including but not limited to the Child Core Set), and some developed methods to address them.
- States developed diverse strategies for overcoming barriers providers faced in using measure reporting to improve quality of care.
1. States encountered a variety of barriers to reporting the Child Core Set to CMS and developed diverse methods to address the barriers.
Using information from Illinois, Maine, Oregon, and Pennsylvania, the NET developed a manuscript (under review) entitled “What factors influence the ability of State Medicaid agencies to report the Child Core Set of health care quality measures? A multicase study.” This study yielded the following findings:
- Key factors affecting a State’s ability to report the Child Core Set measures to CMS included:
- Technical factors, such as clarity and complexity of measure specifications; data availability, completeness, and linkages; and software capabilities.
- Organizational factors, such as a history and culture of data use, support from agency and other State leadership, and availability of skilled programmers.
- Behavioral factors, such as staff motivation and external demand for measures.
- State health care policy environment, including the structure of Medicaid and CHIP agencies, the level of managed care, and other health care reform activities.
- Participation in external capacity-building activities, such as through the CHIPRA quality demonstration.
- States used numerous resources and significant time to interpret and apply CMS’ specifications to available State-specific data.
- Access to fee-for-service claims data enables, but does not guarantee, accurate reporting of all administrative measures.
- Providers must consistently use the billing codes in the measure specifications; otherwise, the measure will underestimate the quality of care.
- In some cases, States have one billing code to cover multiple types of services (for example, developmental screening and behavioral health screening). Such codes cannot be used to measure receipt of each specific service.
- States typically faced major technical challenges linking Medicaid/CHIP data to other data sources, such as immunization registries and vital records, to produce quality measures.
- States had a difficult time producing core measures that require EHR data because most States, health systems, and practices have not yet developed the infrastructure needed to support data transmission from providers’ EHRs. Another challenge was that most Child Core Set measures are not yet specified in the standardized Health Quality Measure Format language for EHR reporting.6, 7
- Diverse stakeholders in most States expressed a demand for children’s health care quality measures reported regularly at the health system, health plan, or practice level rather than annual reports at the State level. The Child Core Set was not designed for practice-level reporting, but many stakeholders wanted to use the measures at the practice level.
- Adapting the Child Core Set measures for these various levels requires modifications to the original measure specifications. These modifications and other State-to-State variations in measure production processes may influence the ability of CMS and States to compare measures across States and use them to drive QI activities.
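The billing-code issues described above can be made concrete with a small sketch. The claims, codes, and eligible population below are hypothetical (code "96110" is assumed here to cover two distinct screening services, as in the shared-code example above); this is an illustration of how administrative measures are computed from claims, not any State's actual production logic.

```python
# Illustrative sketch (hypothetical codes and claims): computing an
# administrative quality measure rate from fee-for-service claims, and
# how a single billing code shared by two services muddies the result.

# Hypothetical claims: (child_id, billing_code). Code "96110" is assumed
# here to cover BOTH developmental screening and behavioral health
# screening, so claims alone cannot distinguish the two services.
claims = [
    ("c1", "96110"),   # developmental screening
    ("c2", "96110"),   # behavioral health screening (same code!)
    ("c3", "99213"),   # office visit, no screening billed
]
eligible_children = {"c1", "c2", "c3"}  # measure denominator

def measure_rate(claims, denominator, numerator_codes):
    """Share of eligible children with at least one qualifying claim."""
    met = {cid for cid, code in claims if code in numerator_codes}
    return len(met & denominator) / len(denominator)

# Because one code covers two services, a "developmental screening" rate
# computed from it actually mixes both services -- and if providers skip
# billing the code at all, the true rate is underestimated.
rate = measure_rate(claims, eligible_children, {"96110"})
print(f"screening rate: {rate:.0%}")  # -> screening rate: 67%
```

The same mechanics explain the underestimation problem: any screening delivered but not billed with a numerator code simply never enters the count.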
2. States applied a range of strategies for using quality measures as part of broader QI initiatives.
Evaluation Highlight 11 identified lessons learned about measure-based strategies that other States can use to improve the quality of care. Analysis of information from Alaska, Florida, Illinois, Maine, Massachusetts, and North Carolina yielded the following findings:
- In some of these States, State-level QI activities were supported by quality reports that were developed for specific State audiences and that compared the State’s performance with neighboring or similar States, as well as with national benchmarks.
- Because improving performance typically requires a collective effort from many stakeholders, some States formed workgroups or held formal meetings to review quality measure reports with key stakeholders (including staff at child-serving agencies, large or influential practices, health plans, and health systems) with the goal of focusing on specific QI priorities.
- Improving quality of care required States to move beyond producing and disseminating quality measure reports to take one or more additional steps, such as the following:
- Establish regular procedures for monitoring quality of care at practice or health system levels, which can help identify providers who are lagging on certain measures.
- Implement policy and programmatic changes in clinical documentation procedures or billing processes, which can make data more accurate and timely.
- Provide individualized and group technical assistance (TA) to practices and health systems through practice facilitation (also called QI or practice coaching), QI specialists, Webinars, and learning collaboratives to help providers develop their own measure-based QI initiatives.
- Initiate statewide stakeholder engagement efforts that seek to build an enduring commitment to improving quality of care for children.
- Consider pay-for-reporting, pay-for-performance, pay-for-improvement, or other incentive programs to spur quality reporting and improvement.
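The first step above, regular monitoring of practice-level performance to identify lagging providers, can be sketched as a simple comparison against a benchmark. All practice names, rates, and the benchmark below are hypothetical; this is only an illustration of the monitoring procedure, not any State's reporting system.

```python
# Illustrative sketch (hypothetical rates): flagging practices that lag
# a benchmark on a quality measure, as a regular monitoring procedure might.
practice_rates = {
    "Practice A": 0.82,
    "Practice B": 0.55,
    "Practice C": 0.71,
}
benchmark = 0.70  # hypothetical statewide or national benchmark

# Practices below the benchmark are candidates for targeted TA.
lagging = sorted(
    (name, rate) for name, rate in practice_rates.items() if rate < benchmark
)
for name, rate in lagging:
    print(f"{name}: {rate:.0%} (below {benchmark:.0%} benchmark)")
# -> Practice B: 55% (below 70% benchmark)
```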
Additionally, through a survey of physicians in two demonstration States (North Carolina and Pennsylvania) and one comparison State (Ohio), we found that the majority of child-serving physicians receive quality reports and believe they are effective for QI, but only one-third of these providers actually use quality reports in their QI activities. Physicians in the demonstration States used quality reports for QI at about the same rate as physicians in Ohio.
3. Practices encountered numerous challenges to reporting quality measures (including but not limited to the Child Core Set), and some developed solutions to address them.
Two of our Evaluation Highlights (1 and 5) describe lessons learned about facilitators and barriers that States and practices encounter as they work to report practice-level quality measures. Analysis of information from Maine, Massachusetts, North Carolina, Pennsylvania, and South Carolina—the States covered in these Evaluation Highlights—yielded the following findings:
- It was critical for States to collaborate with physician practices and providers in selecting or refining measures for QI projects because it built buy-in and ensured that measures were meaningful, feasible, and useful for practice-level improvement.
- Providers expressed preferences for measures that were timely, under the influence of the practices’ activities, and useful to the practice’s QI efforts.
- Both States and practices had to be flexible to reach agreement on measures that are high-priority, actionable, and appropriate for busy practices.
- It was unexpectedly time- and resource-intensive for States to adapt measures originally designed for reporting at the health plan or State level for use at the practice level. The administrative and technical steps needed to calculate quality measures at the practice level are quite different from the steps needed for the State level.
- States had to adjust specifications to fit the reporting capabilities and needs of practices, including testing new data sources and modifying the measure denominator to the practice level. However, the adjustments may compromise the reliability and validity of measures if specifications for practice-level measures move too far from original specifications.
- Accurately attributing patients to providers was especially challenging because some patients are not attached to specific providers, and some are administratively linked to one provider but actually seek care at another site.
- States used a variety of data sources to produce practice-level measures, including established State databases containing Medicaid claims and enrollment and eligibility data; statewide immunization registries; and Health Information Exchanges (HIEs) and provider-submitted data (direct EHR data or manual review of EHR or paper charts).
- It was important for States to plan for resources to manage unexpected data access and quality issues. States were able to overcome some challenges by having experienced data analysts and alternative data extraction plans in place.
- States attempted various strategies to overcome information technology (IT) and data infrastructure challenges, such as outdated or underdeveloped claims systems, HIE, and EHRs. Strategies included involving practices in data collection (via manual extraction of data from EHRs or charts) and developing workarounds with their EHRs. However, many of these activities relied on grant funding and staff and are not sustainable to support collecting and reporting practice-level quality measures in the long run.
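The attribution challenge noted above (patients administratively linked to one provider but actually seen at another) has no single standard fix, but one common approach is plurality-of-visits attribution, sketched below with hypothetical data. This is an illustration of the general technique, not necessarily the method any demonstration State used.

```python
# Illustrative sketch (hypothetical data): plurality-of-visits attribution.
# Each patient is assigned to the practice where they had the most visits,
# rather than to the practice they are administratively linked to.
from collections import Counter

# (patient_id, practice_id) for each visit in the measurement period
visits = [
    ("p1", "A"), ("p1", "A"), ("p1", "B"),   # linked to B, mostly seen at A
    ("p2", "B"),
]
admin_link = {"p1": "B", "p2": "B"}  # administrative assignment

def attribute(visits):
    """Map each patient to the practice with the most visits."""
    counts = {}
    for patient, practice in visits:
        counts.setdefault(patient, Counter())[practice] += 1
    return {p: c.most_common(1)[0][0] for p, c in counts.items()}

attributed = attribute(visits)
print(attributed)        # {'p1': 'A', 'p2': 'B'}
print(admin_link["p1"])  # 'B' -- disagrees with where p1 actually seeks care
```

Even this simple rule shows why attribution affects measure results: p1's care would be counted toward a different practice's denominator depending on which rule is used.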
4. States developed diverse strategies for overcoming barriers providers faced in using measure reporting to improve quality of care.
Our first and fifth Evaluation Highlights included lessons learned about facilitators and barriers that States and practices encountered as they worked to use practice-level quality measures to inform their own QI efforts. Analysis of information received from Maine, Massachusetts, North Carolina, Pennsylvania, and South Carolina and covered in these Evaluation Highlights yielded the following findings:
- When practice staff began to apply quality measures in their own practice, they often discovered clinical documentation limitations (such as incomplete or inconsistent documentation in EHRs and paper charts) and therefore had to make improvements in documentation so they could have accurate information for their QI efforts.
- When practice staff first generated quality reports based on accurate data, they frequently discovered that their performance was worse than they expected.
- For QI activities to be effective, they required the involvement of all staff (including physicians, nurses, and administrative staff). To engage staff, practices made them aware of quality measures, why they matter, and each person’s role in QI.
- Practices found measure reports more useful for identifying QI priorities than for guiding and assessing QI projects, mainly because data receipt often lagged, making it difficult to use reports to assess and adjust redesigned workflows in real time.
- States used a variety of other strategies or combinations of strategies to support QI efforts at the practice level, including payments or stipends to participating practices, training, and TA.
- For example, to encourage QI, Pennsylvania offered pay-for-reporting and pay-for-performance incentives to participating health systems. Incentives included $10,000 per measure reported from an EHR for the base year (up to 18 measures, or $180,000) and $5,000 for each percentage point improvement per measure, up to five points, or $25,000 per measure, capped at a total payment of $100,000. The State offered relatively little TA.
- In contrast, South Carolina provided extensive TA rather than payments or stipends, and used the Child Core Set as the foundation for assisting primary care practices via a multiyear learning collaborative focused on quality improvement (plan-do-study-act) cycles. The State also provided practice staff customized support from practice facilitators.
- States had to invest substantially in both the human and automated components of data extraction to support use of EHRs for practice-level reporting. EHR-based reporting will never be fully automated. For example, each time an EHR was updated or modified, programmers and analysts had to reconsider data coding and modify procedures to report the measures.
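Pennsylvania's incentive structure described above can be expressed as two simple payment formulas. The dollar amounts and caps come from the text; the calculation code itself is a hypothetical sketch, not the State's actual implementation.

```python
# Illustrative sketch of the Pennsylvania incentive formulas described in
# the text (amounts from the report; the code itself is hypothetical).

PAY_PER_MEASURE_REPORTED = 10_000  # base year, per measure reported from an EHR
MAX_MEASURES = 18                  # so reporting tops out at $180,000
PAY_PER_POINT = 5_000              # per percentage-point improvement, per measure
MAX_POINTS_PER_MEASURE = 5         # so at most $25,000 per measure
IMPROVEMENT_CAP = 100_000          # cap on total improvement payments

def reporting_payment(n_measures_reported):
    """Pay-for-reporting: $10,000 per measure, up to 18 measures."""
    return PAY_PER_MEASURE_REPORTED * min(n_measures_reported, MAX_MEASURES)

def improvement_payment(points_by_measure):
    """Pay-for-performance: $5,000 per point gained per measure,
    capped at 5 points per measure and $100,000 overall."""
    total = sum(
        PAY_PER_POINT * min(points, MAX_POINTS_PER_MEASURE)
        for points in points_by_measure
    )
    return min(total, IMPROVEMENT_CAP)

print(reporting_payment(18))                 # 180000
print(improvement_payment([5, 5, 5, 5, 2]))  # 110000 before cap -> 100000
```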
The overall goal of the Category B projects was to identify effective strategies for using health IT to improve the quality of children’s health care, reduce Medicaid and CHIP expenditures, and promote transparency and consumer choice. Based on their final operational plans (developed in the first year of the demonstration), the 12 States that originally intended to implement Category B projects proposed to use several types of health IT and implementation strategies to pursue the goals for their projects (Table 2). These strategies included using various combinations of EHRs, personal health records (PHRs), and HIE pathways for multiple purposes. Purposes included automated reporting of the Child Core Set of quality measures; reporting of Early and Periodic Screening, Diagnosis, and Treatment (EPSDT) measures; supporting clinical decisionmaking; promoting QI in clinical settings; supporting the informational needs of public health agencies; fostering consumer engagement; and coordination across different types of providers (especially in connection with medical homes).
| Health IT Strategy | Number of States Planning To Use Strategy |
| --- | --- |
| Creating or enhancing a regional child health database/warehouse | 7 |
| Linking databases across agencies | 7 |
| Increasing access to data for targeted users | 5 |
| Encouraging practices to use EHRs and quality measures | 7 |
| E-reporting from practice to HIE/child health database | 9 |
| E-reporting from HIE/child health database to practices and/or health agencies | 7 |
| Devising/refining/implementing incentive payments based on reporting data | 1 |
Source: State final operational plans.
States planning Category B projects: ORc, AKc, WVc, WYb, UTb, IDb, FLa, ILa, MEa, VTb, SCa, PAa.
a State planned to link some elements of its Category B project to its Category A project.
b State planned to link some elements of its Category B project to its Category C project.
c State planned to link some elements of its Category B project to both its Category A and C projects.
Most Category B States planned to implement or improve electronic reporting from practices to an HIE or children’s health database, including developing standard reporting tools, forms, and formats. South Carolina had explicit plans to offer incentives for reporting through payment reform. Most States also intended to pursue some form of electronic reporting from an HIE or children’s health database to practices or health agencies (for example, patient-level quality measure reports).
Based on information collected for the evaluation, we identified three findings about the Category B projects, which we discuss in further detail below:
- Most demonstration States faced major challenges that hindered implementation of their Category B projects.
- Projects involving the development of electronic screening methods were able to achieve their objectives.
- Projects that aimed to develop focused health IT applications were successfully implemented.
1. Most demonstration States faced major challenges that hindered implementation of their Category B projects.
A review of information collected during site visits and other discussions with project staff underscores the following obstacles States encountered while executing Category B projects:
- The diversity and turnover of EHR products used by practices and insufficient functionality in EHRs to collect and analyze data posed barriers to EHR use.
- As an example, South Carolina achieved only limited success in producing practice-level quality measure reports that combined Medicaid claims data with EHR data, largely because of difficulties in developing the infrastructure and functionality needed to record and transfer pediatric data from practices’ EHRs to the State. The diversity of EHRs used by practices, and the extent of modifications those EHRs required, further complicated and delayed data extraction.
- Challenges related to interoperability between the practices’ EHRs and State databases, including HIEs, were common among many States. In many cases, these challenges went largely unresolved. Furthermore, most States had not yet developed the infrastructure, such as HIEs, to exchange EHR data with providers. As a result of these barriers, program staff in many States focused on other demonstration projects.
- As an example, in West Virginia, State program staff dropped their plan to create and implement a PHR (the primary goal of their Category B project) for two reasons: first, the platform would have duplicated functionality in the EHRs that practices were already using; second, the State decided not to implement an HIE, which the PHR required in order to be implemented as planned.
- Challenges related to data ownership and security issues also stalled projects in some States.
- As an example, in Illinois, development of a statewide prenatal minimum electronic data set that would extract data from EHRs and link to the State HIE eventually foundered. The State was unable to finalize development because neither the State nor the vendor wanted to own the repository that was tested in the early stages of the grant.
- Practice staff often needed training and TA to effectively use their EHRs.
- As an example, in Alaska, participating practices needed substantial assistance to improve use of their EHRs to support practice functions and QI; as a result, there were few remaining grant resources in that State available for additional work in this grant category.
2. Projects involving the development of electronic screening methods were able to achieve their objectives.
Colorado and New Mexico implemented an electronic screening questionnaire.8 This computer tablet-based risk screening instrument, the electronic Student Health Questionnaire (eSHQ), was used by school-based health centers (SBHCs) to improve early identification of health risk behaviors and initiation of discussions about protective factors for adolescents.
Pennsylvania was also able to implement its electronic screening project as planned. This project involved introducing a fully electronic developmental screening questionnaire in 12 pediatric primary care sites associated with the Children’s Hospital of Philadelphia between 2011 and 2013.
Additional details about each of these projects are available in special innovation features9 posted on the national evaluation Web site. These three States’ projects yielded the following key findings:
- Technology can be used to streamline the administration of screening questionnaires to identify children with health risks, such as developmental delay or autism.
- The use of electronic screening tools in practices and SBHCs can enhance documentation that services were provided and can support data quality, tracking, monitoring, and higher quality of care.
- Adolescents, families, and providers find electronic screening easy to use. Additionally, adolescents valued tablet-based screening as a way of communicating directly and privately with their doctors.
- Although electronic screeners afford many benefits, there are also costs to providers related to ongoing training and technical support.
3. Projects that aimed to develop focused health IT applications were successfully implemented.
Although many States halted their health IT efforts in response to challenges noted elsewhere in this report, two States were each able to implement specific, stand-alone health IT products that are likely to be sustained beyond the grant period.
- With support from the grant, Utah developed an online health platform that practices can use to share information about QI work, including cumulative performance on quality measures and graphic depictions of data over time. This Web-based platform has been used for learning collaboratives and will form the basis of future QI activities in Utah. In addition, other States are using the platform, and their payments to Utah are now supporting maintenance costs.
- Wyoming developed a data dashboard to track care management entity (CME) performance on quality and output measures. The State will continue to use an expanded version of the dashboard to track CME quality under a new contract to expand CME services statewide.
The goal of the Category C projects was to develop, implement, and determine the impact of selected provider-based models on the delivery of children’s health care, including access, quality, and cost. All of the demonstration States except Pennsylvania implemented a Category C project. To achieve the Category C goals, grantees and partner States used one of three strategies: (1) transforming child-serving practices into patient-centered medical homes (PCMHs), (2) strengthening SBHCs, or (3) developing CMEs for children with serious emotional or behavioral disorders. These strategies sometimes overlapped, with SBHCs working to develop PCMH features and many PCMH projects strengthening practices’ general QI skills. We briefly describe each strategy here.
Transforming child-serving practices into PCMHs. Seven grantees, representing 12 States, implemented efforts to enhance PCMH features of child-serving practices.10 These efforts involved varying combinations of strategies to promote practice transformation, including learning collaboratives, one-on-one QI facilitation, TA related to collecting and reporting quality measure data, TA related to building family engagement in practice activities and QI strategies, and practice stipends. About 140 child-serving practices participated in these efforts to some extent (excluding practices that served as comparison practices).11 Through interviews with project staff in the 12 States and staff in many of the participating practices, as well as focus groups with families whose children were patients of these practices, we gathered substantial qualitative data about these PCMH transformation efforts. We also reviewed medical home survey data submitted by States. We analyzed that information to address questions about implementation processes and perceived outcomes of these models. Because practice transformation was such a predominant activity within the demonstration, we devoted considerable effort to documenting our findings in four Evaluation Highlights (nos. 3, 7, 9, and 13) and three manuscripts.
Strengthening SBHCs. Colorado and New Mexico collaborated on efforts to enhance PCMH features of 22 SBHCs. These projects involved practice facilitators, engagement with youth and their families, and collaboration between SBHCs and other providers. Two Evaluation Highlights (nos. 3 and 8) described these efforts.
Developing or enhancing CMEs. Maryland, Georgia, and Wyoming aimed to develop or enhance ways of providing services to youth with serious emotional disorders. Specifically, these States examined means of locating oversight and coordination of services for children with serious emotional disorders outside the traditional provider setting through the use of separate CMEs. We developed an implementation guide that described and built on their efforts.
Looking across the diverse PCMH, SBHC, and CME projects implemented by the 17 States that participated in Category C, we identified seven findings that we believe are especially relevant to AHRQ, CMS, and the States:
- Learning collaboratives were useful for supporting practice transformation when implemented with appropriate clinical expertise and collaboration among State and practice staff.
- The addition of new staff members was viewed as an important factor in practices’ ability to improve QI and PCMH capacity.
- Measuring progress in practice transformation was important for driving improvement.
- States recognized the importance of consumer engagement but noted major challenges in accomplishing this goal.
- Demonstration States identified barriers unique to providing high quality care for adolescents, as compared to children generally, and developed strategies to address them.
- Using peers to support caregivers of children with special health care needs provided valuable assistance to families.
- Successful development of CMEs to serve youth with serious behavioral and emotional disorders required a multi-pronged approach.
1. Learning collaboratives were a useful means for supporting practice transformation when implemented with appropriate clinical expertise and collaboration among State and practice staff.
Learning collaboratives were used in the 12 States that had projects focused on helping practices or SBHCs enhance or adopt features of the PCMH model.12 Analysis of data provided by key informants in these States yielded the following findings:
- States discovered that learning collaborative topics needed to be relevant to providers. Generating the topic list with substantial provider input generally resulted in meaningful provider participation. Many States solicited frequent feedback from the practices and made midcourse adjustments to the collaboratives’ structure and content.
- Maintaining provider engagement and participation in collaboratives is challenging given competing demands for time. States found the following strategies to be useful in recruiting and ensuring the ongoing engagement of practice staff:
- Providing practice stipends to offset some of the revenue lost when staff took time away from care delivery to attend learning collaborative sessions.
- Aligning demonstration efforts with professional development requirements such as offering providers Maintenance of Certification (MOC) credits in exchange for participation in the learning collaboratives.
- Aligning demonstration efforts with external financial incentive programs, such as focusing learning collaboratives on clinical topics covered by Medicaid pay-for-performance measures.
- Offering a combination of traditional didactic instruction and interactive learning activities such as competitions, live demonstrations, and peer networking.
- Offering Web-based learning sessions as alternatives or complements to in-person meetings. Web-based meetings were favored by some providers because they saved on travel time, but it was harder for some States to keep attendees focused and engaged in the Web-based discussions.
- Supplementing learning collaboratives with individualized practice facilitation allowed practices to obtain customized one-on-one assistance and kept practices on task by holding them accountable for learning collaborative “homework.”
- Finding the right mix of participants in a learning collaborative can foster the exchange of information among practices. Sharing experiences was easier when participating practices had similar pre-existing QI and PCMH capacity and patient populations and were working on similar topic areas and measures.
- States felt that tracking practices’ performance on quality measures over time was helpful in identifying areas for improvement and progress achieved, but reporting on these quality measures was sometimes time consuming and challenging for practices.
- To support practices’ QI efforts, States learned that it was important to use a judicious number of quality measures tightly linked to the topics covered in learning collaboratives and not to require overly frequent reporting of measure data.
- To build providers’ QI abilities related to the collection, analysis, interpretation, and use of quality measure data, States learned that it was important to provide adequate supports such as learning collaborative sessions, QI materials and tools, and individualized assistance via practice facilitators.
- Although States were often able to engage participating providers effectively in learning collaborative activities, these providers frequently had difficulty spreading information to other practice staff who did not attend meetings or actively participate in activities. This challenge was especially pronounced when the learning collaborative participant was not the lead physician in a practice.
2. The addition of new staff members was viewed as an important factor in practices’ ability to improve QI and PCMH capacity.
States used CHIPRA funds to provide participating practices with various kinds of additional staff, such as care coordinators, practice facilitators, and parent partners. These additional staff provided new or enhanced services and support specifically related to enhancing QI and PCMH capacity. Analysis of project reports and data from key informant interviews yielded the following findings:
- Adding new staff members is particularly effective when they have the required technical skills and are integrated into the existing organizational culture.
- Practices that played a substantial role in hiring new staff found it easier to integrate a care coordinator than if the State assigned new staff to a practice, because practices could select individuals with the credentials, demeanor, and communication style that best fit their needs and culture.
- New staff appeared to be most effective under two conditions: (1) when existing staff, such as clinicians and administrators, valued their contributions and (2) when existing staff understood the role that the newcomers could play in achieving practice transformation and improved quality of care.
- States and practices found that practice facilitators need to limit the number of practices they work with to allow them to provide meaningful individualized support.
- In many cases, States and practices that used demonstration funds to help pay for additional staff were not able to sustain these staff after the grant period.
- Practices that highly valued the contributions of new staff, such as care coordinators, were more likely to seek alternative funding mechanisms to support these positions after the grant period.
3. Measuring progress in practice transformation was important for driving improvement.
States recognized the need to assess the extent to which their projects were accomplishing the goals of practice transformation and to use these assessments to shape ongoing efforts.
- States working to enhance PCMH features of participating practices understood the need to assess the extent to which the practices were adopting these features.
- States tended to select assessment tools based on a variety of factors, including other medical home activities in the State, the target population for the medical home intervention, and familiarity with particular approaches. CMS did not require States to use the same assessment tool.
- Illinois used the National Committee for Quality Assurance (NCQA) PCMH self-assessment tool; Florida, Maine, Massachusetts, Idaho, North Carolina, South Carolina, and Utah used some version of the Medical Home Index (MHI); and Oregon, Alaska, and West Virginia used components from both tools.13
- The States working to enhance the medical home features of SBHCs worked with practice facilitators to monitor quality measure change over time using the Medical Home Index – Revised Short Form (MHI-RSF).14
- The three CME demonstration States used grant funding to hire a contractor to design an evaluation plan that included measuring the key outcomes or results of CME adoption or expansion, as well as measuring care processes to support QI.
4. States recognized the importance of consumer engagement but noted major challenges in accomplishing this goal.
States experimented with methods to engage families and adolescent patients in QI activities, including using youth engagement specialists, family partners, family advisory councils, and community service boards. These activities yielded several key findings:
- Enlisting family caregivers to provide practices with feedback was valuable for identifying consumer perspectives, but challenging.
- Parents had limited time available to contribute feedback due to their multiple and competing priorities.
- Some parents were not accustomed to “advisory” roles and felt uncomfortable providing feedback. The opportunities offered to parents may not have matched their preferences and abilities (e.g., long surveys, large group meetings, meetings at inconvenient times).
- Some State staff noted that some practices resisted seeking parent feedback because they feared that parents would ask for changes that the practices deemed not feasible (such as offering evening appointments).
- Many practices worked to change features that they believed were important to providing high quality care but that were not noticeable to parents, such as the use of team huddles, improvements to EHRs, and use of patient registries. The low profile of these improvements made it challenging for parents to detect them and provide feedback.
- Enlisting youth participation in project activities carried benefits.
- In SBHCs, youth engagement specialists and youth advisory boards helped to increase students’ and families’ use of the centers.
- Georgia noted that engaging youth and caregivers in designing peer support trainings for youth with social and emotional disorders helped develop a curriculum that was comprehensive, accessible, and relevant.
5. Demonstration States identified barriers unique to providing high quality care for adolescents, as compared to children generally, and developed strategies to address them.
Colorado, New Mexico, North Carolina, and Utah implemented projects that aimed to improve health care for adolescents. These projects identified the following key challenges to providing high quality care to teenagers:
- Many primary care providers do not use adolescent risk screening tools effectively or efficiently.
- Perceived shortages of mental health professionals in some areas have made some primary care providers hesitant to screen for mental health conditions.
- Some primary care providers were uncomfortable discussing sensitive health issues or conditions with teenagers and had difficulty ensuring the confidentiality of information that teens communicate.
In the context of their CHIPRA demonstration projects, the States identified multiple strategies to overcome barriers to providing high quality adolescent health care. These strategies aimed to increase providers’ willingness, frequency, and skill in administering adolescent health risk assessment questionnaires and engaging in private consultations with adolescents regarding their responses. Strategies included:
- Training in tips and techniques for engaging adolescents and using screening tools effectively and efficiently.
- Implementing electronic screening methods that assess adolescents’ risks and strengths, collect sensitive information confidentially, and help providers prioritize topics to discuss during office visits.
- Training in State and Federal privacy rules.
- Providing information about local referral resources by developing resource lists or collaborating with local mental health professionals.
- Working to identify reimbursement for health risk screening and anticipatory guidance for adolescents.
- Offering MOC credits for participating in educational training opportunities specifically related to providing high quality care for adolescents.
6. Using peers to support caregivers of children with special health care needs provided valuable assistance to families.
By providing emotional solace, practical tips, and general encouragement, peer support can be helpful to parents who care for children with special needs. Some States tried a provider-based approach, through which providers link parents who volunteer to provide peer support with parents who ask for such support. Some States worked to develop a peer support workforce whose services are reimbursable through Medicaid. These activities provided the following findings:
- Individuals who provided peer support needed comprehensive training on their roles and responsibilities, a clear understanding of the time commitment required, and access to a support system.
- Caregivers who were best suited to provide peer support were those who had experience navigating the health system and caring for their own child with special health care needs. However, they themselves needed support when they were faced with crises involving their own children.
- Educating health care providers about caregiver peer support helped to increase their understanding of and interest in supporting this service.
- In Maryland and Georgia—States that developed a formal mechanism for certifying and funding caregivers to provide peer support—the services were more likely to be sustained than in other States where peer support was funded only by the demonstration grant.
7. Successful development of CMEs to serve youth with serious behavioral and emotional disorders required a multi-pronged approach.
As the lead State, Maryland worked with its two partners (Georgia and Wyoming) to help them develop or improve CMEs. These States worked to identify funding streams, establish organizational infrastructures, and develop training programs. Challenges included competing priorities at the State level, resistance to a new model on the part of established service providers, and a steep learning curve for most stakeholders. The projects in these three States provided the following findings:
- CMEs can use different management structures, depending on existing service infrastructure. In Maryland (which has two CME models), CMEs are managed by an interagency State-level organization and counties; State Medicaid offices run CMEs in Georgia and Wyoming.
- Gaining financial support from multiple child-serving agencies (Medicaid, welfare, juvenile justice, health, and others) was difficult. Agencies were more willing to provide a funding stream for CMEs if they were involved in the design (for example, determining the eligibility criteria).
- When a State decided to use an out-of-State organization for CME services, State staff had to work diligently to build local trust to overcome provider reluctance to refer youth for services.
The goal of the Category D projects was to assess the Children’s EHR Format (Format). The Format was commissioned by CMS and AHRQ to bridge the gap between the functionality present in most EHRs currently available and the functionality that would more optimally support the care of children. The Format, officially released by AHRQ in February 2013, is a set of 695 recommended requirements for EHR data elements, data standards, usability, functionality, and interoperability that need to be present in an EHR system to address health care needs specific to the care of children. (The current version of the Format is available at https://ushik.ahrq.gov/mdr/portals/cehrf?system=cehrf.)
Two demonstration grantees (Pennsylvania and North Carolina) conducted projects in this category but approached the task somewhat differently. Pennsylvania collaborated with EHR vendors and five of the State’s health systems (three children’s hospitals and affiliated ambulatory practice sites, one federally qualified health center, and one small hospital) to implement and test the Format and determine the extent to which EHRs could yield data for calculating the Child Core Set of quality measures. Consequently, their Category D efforts were closely linked to Category A quality measure reporting activities. In contrast, North Carolina used EHR practice facilitators to work with 30 individual practices to identify the degree to which their EHRs already were consistent with the Format and to gather feedback on Format specifications. Facilitators also focused on training staff in these practices on how to use EHR functionalities that already met Format requirements but were not being used.
Evaluation Highlight 10 presents findings related to these States’ experiences assessing the Format. We summarize these findings here:
- Comparing the Children’s EHR Format with existing EHRs was challenging but valuable.
- EHR vendors were reluctant to engage in the demonstration projects, especially because the U.S. Department of Health and Human Services (HHS) has not mandated that vendors adhere to the Format.
- The Format’s complexity overwhelmed providers’ resources to fully understand it.
1. Comparing the Children’s EHR Format with existing EHRs was challenging but valuable.
One of the first steps that States and practices took was to compare their own EHRs’ functionality with the 695 requirements contained in the model EHR Format. This process produced the following conclusions:
- States and providers generally found the Format to be a major advance in the specification of child-oriented EHR functions. Appreciation for the Format’s thoroughness, however, was diminished by the time-consuming process of comparing the Format with existing EHRs.
- Vendors and practices/health systems often were at odds about whether existing EHRs met Format requirements. It took time to resolve discrepancies—often because practice staff were not aware of their own EHRs’ functionalities and in some cases because of ambiguity in the Format’s requirement descriptions.
- The comparison process meant that many practices learned more about the capabilities of their EHRs and worked to determine how to make Format requirements applicable to practice workflow.
2. EHR vendors were reluctant to engage in the demonstration projects, especially because HHS has not mandated that vendors adhere to the Format.
EHR vendors’ reluctance stemmed in part from competing priorities (such as the ICD-10 transition and achieving certification under the CMS EHR Incentive Program). They also saw little reason to voluntarily make their products Format-compliant or to address children’s health IT needs more generally. Overall, lack of vendor participation impeded progress in Category D activities in both States.
- North Carolina found that vendors needed clinical and informatics guidance to incorporate the Format requirements in a way that supports the State’s desired improvement in children’s health care.
- When EHR facilitators and health systems did get vendors’ attention, assessing the Format helped the parties identify and discuss providers’ expectations for a child-oriented EHR.
3. The Format’s complexity overwhelmed providers’ resources to fully understand it.
Many stakeholders suggested that it would be more fruitful to have a Format that includes a narrower subset of EHR requirements that align closely with current QI priorities or are limited to a subset of critical/core requirements. To that end, AHRQ has convened two workgroups to further evaluate the Format and its potential uses; an abridged version including only the critical and core requirements is now available.15
CMS guidelines for Category E offered States the opportunity to implement additional strategies aimed at improving health care delivery, quality, or access. The activities could relate to one of the CMS key program focus areas listed in the grant solicitation or to another area of the grantee’s choice, provided it complemented the activities performed under another grant category. Because the guidelines for this category were less specific than for Categories A through D, States addressed a range of topics; 11 States fielded Category E projects:
Colorado and New Mexico worked with selected SBHCs in their States to increase youth engagement in their health care. As part of this project, the States developed a Youth Engagement in Health Services (YEHS) survey for high school and middle school students. In both States, participating SBHCs used tablet computers to administer the survey to youth. In Colorado, SBHCs will not be using the survey after the demonstration period. New Mexico integrated about half of the YEHS questions into its existing Student Satisfaction Survey, which all SBHCs that receive State funding are required to administer.
Florida and Illinois established stakeholder workgroups to focus on improving the quality of perinatal and early childhood care for children enrolled in Medicaid and CHIP. Florida provided CHIPRA dollars to the University of South Florida to promote the Florida Perinatal Quality Collaborative (FPQC). During the later years of the project, the collaborative met every 6 months, bringing together hospitals and other perinatal stakeholders to improve the quality of care for mothers and newborns. In its first QI project, the FPQC focused on reducing elective pre-term births through delivery room interventions. The project was viewed as a success; rates of elective scheduled early-term deliveries decreased among the 26 participating hospitals.16 The FPQC’s partners (March of Dimes, the Hospital Engagement Network, and the Blue Cross Foundation) may sustain its work after the grant period.
The Illinois Perinatal Quality Collaborative (IPQC) began with seed funds from the CHIPRA grant and now has a membership of more than 100 hospitals. State demonstration staff also were on the leadership team of the IPQC. Activities have included several statewide conferences, an early elective delivery (EED) initiative involving 49 hospitals (41 have achieved the goal of reducing their EED rates to less than 5 percent), a neonatal nutrition initiative involving 18 neonatal intensive care units (NICUs), an initiative involving 106 hospitals to improve accuracy of 17 key birth certificate variables, and an initiative involving 28 NICUs to improve the quality of care in the first hour after a child’s birth. Although CHIPRA funding supported the creation of the collaborative, the group has also received funds from other sources, including March of Dimes, Illinois Department of Healthcare and Family Services, the Illinois Hospital Association, and a Centers for Disease Control and Prevention (CDC) grant, and will continue operations after the CHIPRA demonstration ends.
Maryland, Georgia, and Wyoming used Category E funding to support their Category C work to develop or expand CMEs for youth with serious emotional and behavioral health needs. We note each State’s specific activities conducted under their Category E projects and their sustainment status:
- Maryland surveyed and held focus groups with behavioral health providers, families, and youth on crisis response and family support services to understand families’ experiences related to these services and identify gaps in service availability. Based on these discussions, the State developed a report outlining best practices for crisis response and disseminated it to local organizations providing these services. The State also determined an appropriate reimbursement rate for crisis and family support services and included these services in a new State plan amendment.
- Georgia established a network of certified family peer support specialists to develop related training programs and to obtain Medicaid reimbursement for the services provided by these specialists. The State was able to institute a training and certification program for family and youth peer support specialists that will continue after the grant period through separate funding mechanisms.
- Wyoming used CHIPRA funds to support the Too Young, Too Much, Too Many Program, which tracks patterns of psychotropic medication prescribing in Medicaid, addresses misuse by physicians, and determines whether youth need additional intervention. The State renewed and expanded its contract with its pharmacy benefit manager to continue this program after the grant period.
Massachusetts formed the Children’s Health Quality Coalition, a 60-member multi-stakeholder group representing clinicians, payers, State and local government agencies, family advocacy groups, and individual parents and families. During the demonstration, the coalition reviewed child health quality measure reports to analyze gaps in care and identify priority areas, convened task forces and workgroups that advanced its agenda in priority areas, and developed a Web site with resources to help practices and families improve the quality of care. Going forward, the coalition will be incorporated into the Massachusetts Health Quality Partners’ coalition agenda and initiatives. The Massachusetts Children’s Health Quality Coalition’s Web site17 remains live, and content has been updated to reflect its new organizational home.
Utah and Idaho, with support from the National Improvement Partnership Network (NIPN), established or strengthened State-based pediatric QI networks to support continued development of QI initiatives for children.18
- Idaho established the Idaho Health and Wellness Collaborative for Children, which will be housed at the St. Luke’s Children’s Hospital.
- In Utah, the CHIPRA project team was closely linked to an existing improvement partnership network (Utah Pediatric Partnership to Improve Healthcare Quality, or UPIC) that provided intellectual leadership for the State’s demonstration grant. After the grant period, UPIC will continue to seek internal and external support for QI initiatives for children in Utah—efforts that will be informed by experiences and relationships developed through the grant.
Vermont used Category E funding to contract with NIPN to provide TA to improvement partnerships (IPs) in more than 20 States, develop core measure sets, and hold both annual operations trainings attended by representatives from IPs nationwide and monthly “all-site” conference calls. NIPN is run through the Vermont Child Health Improvement Project based at the University of Vermont’s College of Medicine.
In addition to the category-specific findings, Evaluation Highlights 4 and 6 and a manuscript on sustainability include findings that cut across the five demonstration categories.19, 20 Key findings include:
- To ensure that child health care remains an important topic on State health policy agendas, demonstration States leveraged the CHIPRA grant to develop or strengthen connections to key policymakers.
- Of the project elements that were in place at the end of the fifth year of the demonstration, more than half were, or were highly likely to be, sustained after the grant period was over.
- Demonstration grants allowed States to gain substantial experience, knowledge, and partnerships related to QI for children in Medicaid and CHIP—a resource we refer to as “intellectual capital.”
1. To ensure that child health care remains an important topic on State health policy agendas, demonstration States leveraged the CHIPRA grant to develop or strengthen connections to key policymakers.
Demonstration States reported that the presence of a CHIPRA grant sent State policymakers a signal about the importance of improving the quality of care for children and adolescents. The prestige of winning the grant lent legitimacy to staff efforts to improve the quality of care for children. In many States, it also allowed key staff to participate in policy discussions and supported them in including children in the broader health reform activities occurring in the State. Project staff in several States also learned how to leverage data and analysis generated through the CHIPRA quality demonstration to engage policymakers, raise awareness about pediatric health issues, and suggest potential solutions. For example, demonstration staff in Maryland used behavioral health claims data to identify gaps in the availability of crisis response tools throughout the State and made recommendations for a redesign of the State’s crisis response system.
The strategies that States used to elevate children on health policy agendas reflected the political and administrative context in each State. Common to all of these efforts, however, were the new connections formed among State officials, policymakers, providers, provider associations, private-sector payers and insurance plans, patient representatives, staff of various State and Federal reform initiatives and demonstrations, and other key stakeholders.
In addition, States aligned their efforts with—and used their CHIPRA quality demonstration project experiences to directly inform—broader Federal and State health reform initiatives. For example, States most commonly linked their efforts to existing statewide reform initiatives, particularly those related to PCMH implementation.
2. Of the project elements that were in place at the end of the fifth year of the demonstration, more than half were, or were highly likely to be, sustained after the grant period was over.
During the demonstration, States implemented projects that included multiple elements. For example, some State projects aimed to support PCMH transformation, and these projects typically included separate elements such as learning collaboratives, practice facilitation, financial and labor resources provided to participating practices, and health care training or certification programs. We defined each of these activities as a separate element, because some were sustained and others were not. Using this definition, States implemented 115 elements by the end of the grant program’s fifth year. Our analysis of the sustainment of project elements yielded the following findings:
- Across all States, 57 percent of elements were or were highly likely to be sustained. The percentage of sustained elements varied by topic, with elements related to patient engagement being least likely to be sustained and elements related to practice facilitation and quality reporting being most likely to be sustained.
- Seventeen demonstration States implemented 40 elements used singly or in combination for service delivery transformation and sustained just over half of these elements. Some types of elements within this topic area were more likely to be sustained than others.
- States sustained 77 percent of their facilitation programs, compared with 60 percent of their training and certification elements, 42 percent of their learning collaboratives, and 20 percent of their programs to provide payments to practices for participating in QI activities.
- Eight States developed strategies for reporting quality measures to CMS, and all of them sustained or hoped to sustain those elements after the grant period. Consistent with our findings related to challenges in developing quality reports, States were somewhat less successful in sustaining program elements related to quality measure reports for stakeholders within the State or to payments and technical assistance to providers to produce or use reports on quality measures.
- Twelve States implemented a diverse range of elements related to health IT that involved providing TA to improve data from EHRs, achieving data system interoperability, and establishing Web sites with information for providers or families; about half of these elements were sustained. Although demonstration States encountered challenges in health IT-related projects, the sustainment of about half of them suggests that States are committed to using health IT as a platform for improving quality of care generally.
- States planned to spread more than half of sustained elements following the demonstration. For elements related to service transformation, spreading the program elements typically involved increasing the number of practices that States were reaching through learning collaboratives or practice facilitation. States also spread concepts and approaches from the demonstration to QI programs in the adult health realm.
- States implemented about one-quarter of all sustained elements statewide as part of the demonstration and therefore had already maximized the spread of these elements. For example, one State developed and is highly likely to sustain a new administrative infrastructure to analyze data from multiple child-serving agencies—an element that was designed to be spread statewide from its inception.
- Even though many States had contracted with evaluation teams to conduct various types of monitoring and evaluation studies, States reported few opportunities to make sustainment decisions based on empirical data.
3. Demonstration grants allowed States to gain substantial experience, knowledge, and partnerships related to QI for children in Medicaid and CHIP—a resource we refer to as “intellectual capital.”
Demonstration staff in all 18 States garnered a great deal of experience through partnerships with officials, providers, and quality specialists in their own and other States. The intellectual capital acquired during the demonstration will be sustained in varying forms in 13 States. For example:
- Six States will build on demonstration activities through new scope of work provisions in pre-existing contracts with State universities.
- In five States, key State staff either stayed in their positions or moved to other positions in the Medicaid agency and remained closely involved in QI activities. In contrast, key staff who provided leadership for the demonstration grant in five other States will not be supported after the grant period.
- Two States developed new entities: one created a new statewide partnership to continue QI activities for children; the other will establish a new administrative unit within the Medicaid agency to support QI learning collaboratives and related initiatives begun under the demonstration grant.