AHRQ Research Studies
Research Studies is a compilation of published research articles funded by AHRQ or authored by AHRQ researchers.
Results
1 to 1 of 1 Research Studies Displayed

Coley RY, Liao Q, Simon N
Empirical evaluation of internal validation methods for prediction in large-scale clinical data with rare-event outcomes: a case study in suicide risk prediction.
Clinical prediction models for uncommon outcomes, such as suicide, psychiatric hospitalizations, and opioid overdose, are garnering increased attention. Precise model validation is essential for choosing the appropriate model and deciding on its application. Split-sample estimation and validation of clinical prediction models, where data are divided into training and testing sets, may decrease predictive accuracy and precision. Utilizing the entire dataset for estimation and validation improves the sample size for both processes, but overfitting or optimism must be accounted for. The researchers compared split-sample and whole-sample approaches for estimating and validating a suicide prediction model. The study found that both the split-sample and whole-sample prediction models demonstrated similar prospective performance. Performance estimates assessed in the testing set for the split-sample model and through cross-validation for the whole-sample model correctly represented prospective performance. Validation of the whole-sample model using bootstrap optimism correction overestimated prospective performance. The researchers concluded that although previous studies have validated the bootstrap optimism correction for parametric models in small samples, this method did not accurately validate the performance of a rare-event prediction model estimated with random forests in a large clinical dataset. Cross-validation of prediction models estimated using all available data offers precise independent validation while maximizing sample size.
AHRQ-funded; HS026369.
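The three internal-validation strategies the abstract compares — split-sample validation, cross-validation on the whole sample, and bootstrap optimism correction — can be sketched in plain Python. This is a minimal illustration, not the authors' code: the data are simulated, the "model" is a 1-nearest-neighbour memorizer standing in for a flexible learner such as a random forest, the event rate and sample size are hypothetical, and AUC serves as the performance measure.

```python
import bisect
import random

def auc(scores, labels):
    """Concordance (AUC): P(score of a random case > score of a random
    non-case), counting ties as 0.5. Returns None if only one class."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return None
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def fit_1nn(xs, ys):
    """1-nearest-neighbour 'model' on one feature. It memorises its
    training data, so apparent (training-set) performance is wildly
    optimistic -- a stand-in for a flexible learner like a random forest."""
    pairs = sorted(zip(xs, ys))
    sx = [p[0] for p in pairs]
    sy = [p[1] for p in pairs]
    def predict(q):
        i = bisect.bisect_left(sx, q)
        cands = [j for j in (i - 1, i) if 0 <= j < len(sx)]
        return sy[min(cands, key=lambda j: abs(sx[j] - q))]
    return predict

# Simulated rare-event data (~5% event rate; all numbers hypothetical).
random.seed(1)
N = 600
y = [1 if random.random() < 0.05 else 0 for _ in range(N)]
x = [3.0 * yi + random.gauss(0.0, 1.0) for yi in y]  # noisy risk signal

# Apparent performance: fit and evaluate on the same data (overfit).
model = fit_1nn(x, y)
apparent = auc([model(q) for q in x], y)

# 1) Split-sample: estimate on one half, validate on the held-out half.
idx = list(range(N))
random.shuffle(idx)
tr, te = idx[:N // 2], idx[N // 2:]
m = fit_1nn([x[i] for i in tr], [y[i] for i in tr])
split_auc = auc([m(x[i]) for i in te], [y[i] for i in te])

# 2) 5-fold cross-validation: every observation is used for both
#    estimation and out-of-fold validation.
fold_aucs = []
for k in range(5):
    te_k = idx[k::5]
    te_set = set(te_k)
    tr_k = [i for i in idx if i not in te_set]
    m = fit_1nn([x[i] for i in tr_k], [y[i] for i in tr_k])
    a = auc([m(x[i]) for i in te_k], [y[i] for i in te_k])
    if a is not None:  # skip a fold if it lacks both classes
        fold_aucs.append(a)
cv_auc = sum(fold_aucs) / len(fold_aucs)

# 3) Bootstrap optimism correction (Harrell-style): corrected =
#    apparent - mean over replicates of
#    (bootstrap-sample apparent - bootstrap model's AUC on original data).
optimisms = []
for _ in range(20):
    bs = [random.randrange(N) for _ in range(N)]
    xb, yb = [x[i] for i in bs], [y[i] for i in bs]
    mb = fit_1nn(xb, yb)
    a_boot = auc([mb(q) for q in xb], yb)
    a_orig = auc([mb(q) for q in x], y)
    if a_boot is not None and a_orig is not None:
        optimisms.append(a_boot - a_orig)
corrected = apparent - sum(optimisms) / len(optimisms)

print(f"apparent AUC (overfit):        {apparent:.3f}")
print(f"split-sample AUC:              {split_auc:.3f}")
print(f"5-fold cross-validated AUC:    {cv_auc:.3f}")
print(f"bootstrap optimism-corrected:  {corrected:.3f}")
```

With a memorizing learner, the bootstrap-fit model still sees roughly 63% of the original observations in-bag and predicts them perfectly, so the optimism-corrected estimate typically stays above the cross-validated one — echoing the study's caution that bootstrap optimism correction can overestimate prospective performance for flexible models, while cross-validation gives an honest estimate without sacrificing sample size.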
Citation: Coley RY, Liao Q, Simon N.
Empirical evaluation of internal validation methods for prediction in large-scale clinical data with rare-event outcomes: a case study in suicide risk prediction.
BMC Med Res Methodol 2023 Feb 1; 23(1):33. doi: 10.1186/s12874-023-01844-5.
Keywords: Research Methodologies, Risk