Background Report on 2013 Retirement of Measures from the Child Core Set



The process for considering retirement of CCS measures comprised several steps undertaken from May through October 2013: identification of measures to be considered for retirement; creation of a 2013 SNAC (go to Appendix A for the list of members); identification of criteria by which to assess measures for retirement; identification of information sources to provide data relevant to the criteria; data searches and analysis; provision of the information to the 2013 SNAC; and two rounds of scoring measures against the criteria and voting on retirement. This process is described in more detail below.

Measure Selection  

CMS selected 20 quality measures from the initial CCS to be considered for potential retirement. A listing of these measures and their stewards is provided in Appendix B. Of the measures in the CCS as of January 2013 that were not included in the 20 to be considered for retirement, 3 had just been added,7 1 had been retired in January 2013 because it proved impossible to collect (avoidance of systemic antibiotic use for otitis media with effusion), 1 was required under a separate provision of CHIPRA (Child Consumer Assessment of Healthcare Providers and Systems [CAHPS®] 4.0 for Medicaid), and 2 were collected as part of mandatory reporting under Early and Periodic Screening, Diagnosis, and Treatment (EPSDT) provisions (preventive and treatment dental services). 

2013 SNAC Deliberations

First SNAC meeting. The first 2013 SNAC meeting was held June 5, 2013. During this meeting, AHRQ and CMS discussed with SNAC members (1) the methods that would be used to make recommendations, (2) a proposed set of criteria for assessing measures using a modified Delphi method, and (3) a proposed relative weighting of those criteria. In a subsequent exchange of emails and postings to a password-protected Extranet site, SNAC members confirmed the final criteria and an analytic approach, led by AHRQ, for providing information relevant to the criteria. Between the first and second SNAC meetings, AHRQ provided draft "measure reports" containing the agreed-upon information for each measure. 

Second SNAC meeting. A second SNAC meeting was held September 6, 2013. During this meeting, SNAC members were given an opportunity to provide feedback on the draft measure reports, and the discussion focused on initial views of the strengths and weaknesses of each measure. Both meetings were held as video-enabled webinars, and SNAC members also had the opportunity to communicate with AHRQ between meetings using a password-protected Extranet site. The draft analytic reports included basic measure information and, where available, information on the importance of the measure (prevalence/incidence and cost/utilization related to the measure topic, as well as recent State and health plan performance on the measure), scientific acceptability (validity and reliability[a] of the measure), feasibility (number of States reporting, data source, and requests for technical assistance), and usability (evidence of the ability to improve performance on the measure). 

During the period September 6–23, 2013, AHRQ, CMS, and RTI revised the draft analytic reports as needed. In addition, at the request of the SNAC, AHRQ developed a table of findings on a restricted number of key subcriteria. Data sources for the analytic reports varied by topic area because AHRQ, CMS, and the CHIPRA Coordinating and Technical Assistance Center (CCTAC) focused on readily available data from sources such as the National Survey of Children's Health, the Healthcare Cost and Utilization Project, the Medical Expenditure Panel Survey, State Medicaid and CHIP program submissions to CMS, and data on the reliability of most Healthcare Effectiveness Data and Information Set (HEDIS) measures in the CCS (provided by the National Committee for Quality Assurance [NCQA] for use by the SNAC only). CCTAC analyzed the 2009 Medicaid Analytic eXtract (MAX) data to obtain prevalence/incidence and cost/utilization information. The aim for each report was to provide information specific to the Medicaid/CHIP population by race/ethnicity, socioeconomic status, and special health care need, wherever possible. Measure-specific definitions for prevalence/incidence and cost/utilization and the sources used are presented in Appendix C. An example of the final measure report template, including all possible content areas, is provided in Appendix D. 

First round of SNAC scoring and voting. In each round, SNAC members were asked to provide a score between 1 and 9 for each measure on each of the major criteria—importance, scientific acceptability, feasibility, and usability—with 1 representing the lowest possible score (i.e., does not meet the criterion) and 9 representing the highest possible score (i.e., does meet the criterion). In addition to scoring each measure against the specific criteria, SNAC members were also asked to vote on whether or not to retire each measure. Finally, SNAC members were given the opportunity to provide additional comments to explain the rationale behind their scores and retirement votes. Upon receiving scores on each measure criterion, CCTAC calculated a total score as the average score for the four criteria, with each criterion weighted equally. A total score was not calculated for SNAC members who did not score all four criteria. A higher median score is one indication of the group's assessment that the measure should be retained in the CCS rather than retired. Responses to the retirement question were not necessarily dependent on the numeric scores but were based on the overall judgment of the SNAC members after considering all the pertinent information provided for each measure. 
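The total-score calculation described above is simple arithmetic. As an illustration only (the ballots shown are hypothetical; actual SNAC scores are not published in this report), the equal-weight averaging and median could be computed as follows:

```python
from statistics import median

# Hypothetical first-round ballots for one measure: each SNAC member
# scores the four major criteria from 1 (does not meet) to 9 (meets).
ballots = [
    {"importance": 8, "scientific_acceptability": 7, "feasibility": 6, "usability": 7},
    {"importance": 5, "scientific_acceptability": 6, "feasibility": 4, "usability": 5},
    {"importance": 9, "scientific_acceptability": 8, "feasibility": 7},  # usability not scored
]

CRITERIA = ("importance", "scientific_acceptability", "feasibility", "usability")

# Total score = unweighted mean of the four criteria; no total score is
# calculated for a member who did not score all four criteria.
totals = [
    sum(b[c] for c in CRITERIA) / len(CRITERIA)
    for b in ballots
    if all(c in b for c in CRITERIA)
]

print(totals)          # [7.0, 5.0] — third ballot excluded (incomplete)
print(median(totals))  # 6.0
```

Because each criterion carries equal weight, the total score is just the arithmetic mean; the median of the total scores then serves as the group-level indicator described above.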

Second round of SNAC scoring and voting. Before the second (final) round of scoring, SNAC members received a summary of the first scoring round (in which 15 SNAC members participated). The scoring summary included (1) the distribution of scores on each criterion, (2) the distribution of total scores for each measure, (3) the count of individuals who recommended retirement, and (4) the comments made on each criterion and on the measure overall. SNAC members also received updated measure reports that corrected any inaccuracies or misinterpretations in the draft reports and, where possible, incorporated SNAC suggestions and provided additional measure information. The second round of SNAC scoring was completed on October 28, 2013. 
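The tallies in the scoring summary (score distributions and retirement-vote counts) amount to simple frequency counts. A minimal sketch, again using hypothetical votes and scores rather than actual SNAC data:

```python
from collections import Counter

# Hypothetical first-round inputs for one measure (5 of the 15
# participating members shown, for brevity).
retire_votes = ["retire", "retain", "retain", "retire", "retain"]
importance_scores = [8, 5, 8, 3, 7]

# Count of individuals who recommended retirement.
vote_counts = Counter(retire_votes)
print(vote_counts["retire"])  # 2

# Distribution of scores on one criterion.
score_distribution = Counter(importance_scores)
print(sorted(score_distribution.items()))  # [(3, 1), (5, 1), (7, 1), (8, 2)]
```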

a. Reliability data were provided only for National Committee for Quality Assurance (NCQA) measures. Validity information was not provided for 14 of the measures, and only limited validity information was provided for the remaining 6 measures.


Internet Citation: Methods. Content last reviewed February 2014. Agency for Healthcare Research and Quality, Rockville, MD.