# Prediction model reporting (lite) — v1.0.0
Released: 2026-02-02 · Hash: sha256:821ec1dfd826d13c8b564574d027f8affdb49420699f2fcfcaea23506dedb085
## Title & Abstract
| Level | ID | Criterion |
|---|---|---|
| must | PM-01 | State explicitly that the study develops and/or validates a prediction model, and name the target population and the outcome being predicted. |
| should | PM-02 | Summarize in the abstract the data source, participants, outcome, predictors, modeling approach, validation strategy, and key performance results. |
## Introduction
| Level | ID | Criterion |
|---|---|---|
| must | PM-10 | Describe the clinical/scientific context, the intended use of the model (diagnostic vs. prognostic), and the decision it aims to support. |
| should | PM-11 | Clarify whether the work is model development, external validation, model updating, or a combination, and whether an analysis plan was pre-specified. |
## Methods
| Level | ID | Criterion |
|---|---|---|
| must | PM-20 | Describe the data source(s), study design, setting, and relevant dates (accrual and follow-up), including the number and type of sites. |
| must | PM-21 | Define participant eligibility and how participants were selected, including treatments/exposures relevant to prediction, if applicable. |
| must | PM-22 | Define the outcome, including how and when it was assessed, the prediction time horizon, and any blinding of outcome assessment. |
| must | PM-23 | Define candidate predictors, how and when they were measured, and any blinding to the outcome when predictors were assessed. |
| should | PM-24 | Report sample size with emphasis on the number of outcome events, and justify model complexity (e.g., events-per-parameter or regularization rationale). |
| must | PM-25 | Describe how missing data were handled for predictors and outcomes (e.g., complete-case analysis, imputation, or modeling of missingness). |
| must | PM-26 | Describe model-building procedures: predictor selection, transformations, interaction terms, regularization, and internal validation/resampling if used. |
| should | PM-27 | For machine learning workflows, describe preprocessing, feature engineering, hyperparameter tuning, and how the final model was selected. |
| must | PM-28 | Specify performance measures (discrimination and calibration at minimum) and how uncertainty was quantified (e.g., bootstrap or confidence intervals). |
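PM-24's events-per-parameter justification can be a one-line calculation. A minimal sketch, where the counts (120 events, 8 candidate parameters) are hypothetical placeholders for a study's own numbers:

```python
# Events-per-parameter (EPP) check for a binary outcome.
# All counts below are hypothetical; substitute your own.

def events_per_parameter(n_events: int, n_parameters: int) -> float:
    """Outcome events divided by candidate model parameters
    (count every estimated term: dummies, splines, interactions)."""
    return n_events / n_parameters

# Hypothetical study: 120 outcome events, 8 candidate parameters.
epp = events_per_parameter(n_events=120, n_parameters=8)
print(f"EPP = {epp:.1f}")  # common rules of thumb ask for roughly 10-20
```

Reporting the EPP (or a formal sample-size calculation) lets readers judge overfitting risk directly from the Methods section.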
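For PM-25, one common choice is imputation fitted inside the modeling pipeline, so it is re-estimated within each resample rather than leaking information from held-out data. A minimal sketch on simulated data, assuming scikit-learn as the toolchain (the checklist itself is tool-agnostic); median imputation is an illustrative choice, not a recommendation:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Simulated data: 200 participants, 3 predictors, binary outcome.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200) > 0).astype(int)
X[rng.random(X.shape) < 0.1] = np.nan  # inject ~10% missingness

# Imputer lives inside the pipeline, so any cross-validation or
# bootstrap refits it on each training split only.
model = make_pipeline(SimpleImputer(strategy="median"),
                      LogisticRegression())
model.fit(X, y)
print("in-sample accuracy:", model.score(X, y))
```

Whatever strategy is used (complete-case, single or multiple imputation, missingness indicators), PM-25 asks that it be stated for both predictors and the outcome.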
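PM-26's regularization and internal validation can be combined in a single resampling loop. A sketch on simulated data, again assuming scikit-learn; the 5-fold split, L2 penalty, and C=1.0 are illustrative choices, not prescriptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Simulated data: 300 participants, 5 candidate predictors.
rng = np.random.default_rng(7)
X = rng.normal(size=(300, 5))
y = (X[:, 0] - X[:, 2] + rng.normal(size=300) > 0).astype(int)

# L2-penalized logistic model; scaling sits inside the pipeline so the
# penalty acts on a common scale and is refit per fold (no leakage).
model = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l2", C=1.0))
aucs = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {aucs.mean():.3f} (sd {aucs.std():.3f})")
```

Reporting the resampling scheme (folds, repeats, what was refit inside each resample) is the substance of the "internal validation" clause in PM-26.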
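For PM-28, a percentile bootstrap is one way to attach uncertainty to a discrimination estimate. A sketch using hypothetical predicted risks and outcomes (the 1,000 resamples and 95% level are conventional, not mandated by the checklist):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# Hypothetical predicted risks and observed binary outcomes.
y_true = rng.integers(0, 2, size=500)
y_prob = np.clip(0.5 * y_true + 0.25 + 0.2 * rng.normal(size=500),
                 0.01, 0.99)

point = roc_auc_score(y_true, y_prob)
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(y_true), size=len(y_true))
    if y_true[idx].min() == y_true[idx].max():
        continue  # resample lacks both classes; AUC undefined, skip
    boot.append(roc_auc_score(y_true[idx], y_prob[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC {point:.3f} (95% bootstrap CI {lo:.3f} to {hi:.3f})")
```

The same resampling loop can carry calibration measures alongside the AUC, so one bootstrap serves all of PM-28's uncertainty reporting.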
## Results
| Level | ID | Criterion |
|---|---|---|
| must | PM-30 | Report participant flow and characteristics, including missingness and outcome event counts, in the development and validation datasets. |
| must | PM-31 | Describe the final model (predictors included) and provide sufficient specification to allow use (e.g., coefficients and intercept, or a model artifact). |
| must | PM-32 | Report model performance, including calibration assessment (plot or calibration measures) and discrimination metrics, with uncertainty. |
| may | PM-33 | If external validation or updating was performed, report performance by dataset/site and describe any recalibration or model-updating steps. |
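One way to satisfy PM-32's calibration requirement is logistic recalibration: regress the observed outcome on the logit of the predicted risk; a slope near 1 and an intercept near 0 suggest good calibration. A sketch on simulated validation data, assuming scikit-learn; the simulated risks are hypothetical stand-ins for a model's actual predictions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
# Hypothetical predicted risks for a validation set, with outcomes
# generated from those same risks (so calibration should be good).
p_hat = np.clip(rng.beta(2, 2, size=400), 0.01, 0.99)
y = rng.binomial(1, p_hat)

# Regress outcome on the logit of predicted risk. C=1e6 makes the
# default L2 penalty negligible, approximating an unpenalized fit.
lp = np.log(p_hat / (1 - p_hat)).reshape(-1, 1)
recal = LogisticRegression(C=1e6).fit(lp, y)
slope = recal.coef_[0, 0]
intercept = recal.intercept_[0]
print(f"calibration slope {slope:.2f}, intercept {intercept:.2f}")
```

A calibration plot (observed vs. predicted risk by decile, or a smoothed curve) complements these two summary numbers and is the "plot" option PM-32 names.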
## Discussion
| Level | ID | Criterion |
|---|---|---|
| must | PM-40 | Discuss limitations, potential biases, and how the data and setting affect the applicability of the model. |
| should | PM-41 | Discuss implications for practice and next steps (e.g., external validation, impact analysis, monitoring). |
## Other
| Level | ID | Criterion |
|---|---|---|
| must | PM-50 | Provide access to the model and supporting materials (code, model file, or full equation) and state how it should be used in practice. |
| should | PM-51 | Provide protocol/registration information (if available) and disclose funding, competing interests, and data/code availability. |