Prediction model reporting (lite) — v1.0.0

Released: 2026-02-02 · Hash: sha256:821ec1dfd826d13c8b564574d027f8affdb49420699f2fcfcaea23506dedb085

Title & Abstract
Criterion | Text
must PM-01 | Make it explicit that this is a prediction model study (development and/or validation), and name the target population and outcome.
should PM-02 | Abstract summarizes data source, participants, outcome, predictors, modeling approach, validation strategy, and key performance results.
Introduction
Criterion | Text
must PM-10 | Describe the clinical/scientific context, intended use of the model (diagnostic vs prognostic), and the decision it aims to support.
should PM-11 | Clarify whether the work is model development, external validation, model updating, or a combination, and whether an analysis plan was pre-specified.
Methods
Criterion | Text
must PM-20 | Describe the data source(s), study design, setting, and relevant dates (accrual and follow-up), including the number and type of sites.
must PM-21 | Define participant eligibility and how participants were selected, including treatments/exposures relevant to prediction, if applicable.
must PM-22 | Define the outcome, including how and when it was assessed, the time horizon, and any blinding of outcome assessment.
must PM-23 | Define candidate predictors, how and when they were measured, and any blinding to the outcome when predictors were assessed.
should PM-24 | Report the sample size with emphasis on the number of outcome events, and justify model complexity (e.g., events-per-parameter or regularization rationale).
must PM-25 | Describe how missing data were handled for predictors and outcomes (e.g., complete-case analysis, imputation, or modeling missingness).
must PM-26 | Describe model-building procedures: predictor selection, transformations, interaction terms, regularization, and internal validation/resampling if used.
should PM-27 | For machine learning workflows, describe preprocessing, feature engineering, hyperparameter tuning, and how the final model was selected.
must PM-28 | Specify performance measures (discrimination and calibration at minimum) and how uncertainty was quantified (e.g., bootstrap, confidence intervals).
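The complexity justification asked for in PM-24 can rest on a simple events-per-parameter (EPP) calculation. A minimal sketch, with hypothetical event and parameter counts chosen only for illustration:

```python
# Rough events-per-parameter (EPP) check for a binary-outcome model.
# The counts below are hypothetical, for illustration only.
n_events = 120           # outcome events in the development data
n_candidate_params = 15  # candidate predictor parameters (incl. dummy/spline terms)

epp = n_events / n_candidate_params
print(f"Events per parameter: {epp:.1f}")

# A common rule of thumb asks for EPP of at least 10; lower values suggest
# penalization (regularization) or a smaller candidate predictor set.
if epp < 10:
    print("Consider regularization or reducing model complexity.")
```

More formal sample-size calculations exist for prediction models; EPP is only a screening heuristic, which is why PM-24 asks for an explicit justification rather than a fixed threshold.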
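PM-28's minimum of discrimination, calibration, and quantified uncertainty can be illustrated with a self-contained sketch: a pairwise c-statistic (equivalent to the ROC AUC), calibration-in-the-large, and a percentile bootstrap interval. The outcome labels and predicted probabilities below are invented for the example:

```python
import random

def c_statistic(y_true, y_prob):
    """Concordance (c-statistic / AUC): probability that a randomly chosen
    event receives a higher predicted risk than a randomly chosen non-event."""
    pairs = concordant = ties = 0
    for yi, pi in zip(y_true, y_prob):
        for yj, pj in zip(y_true, y_prob):
            if yi == 1 and yj == 0:
                pairs += 1
                if pi > pj:
                    concordant += 1
                elif pi == pj:
                    ties += 1
    return (concordant + 0.5 * ties) / pairs

# Hypothetical outcomes and predicted risks (illustration only).
y = [0, 0, 0, 1, 0, 1, 1, 0, 1, 1]
p = [0.1, 0.2, 0.35, 0.4, 0.3, 0.8, 0.7, 0.25, 0.6, 0.15]

auc = c_statistic(y, p)
print(f"C-statistic: {auc:.2f}")

# Calibration-in-the-large: mean predicted risk vs observed event rate.
print(f"Mean predicted: {sum(p) / len(p):.3f}  Observed rate: {sum(y) / len(y):.3f}")

# Percentile bootstrap 95% interval for the c-statistic.
rng = random.Random(0)
boots = []
for _ in range(500):
    idx = [rng.randrange(len(y)) for _ in range(len(y))]
    yb, pb = [y[i] for i in idx], [p[i] for i in idx]
    if 0 < sum(yb) < len(yb):  # resample must contain both classes
        boots.append(c_statistic(yb, pb))
boots.sort()
ci_low = boots[int(0.025 * len(boots))]
ci_high = boots[int(0.975 * len(boots))]
print(f"95% bootstrap CI: ({ci_low:.2f}, {ci_high:.2f})")
```

In practice a calibration plot (or calibration slope/intercept) is expected alongside calibration-in-the-large, as PM-32 notes for the Results section.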
Results
Criterion | Text
must PM-30 | Report participant flow and characteristics, including missingness and outcome event counts in development and validation datasets.
must PM-31 | Describe the final model (predictors included) and provide sufficient specification to allow use (e.g., coefficients/intercept or model artifact).
must PM-32 | Report model performance results, including calibration assessment (plot or calibration measures) and discrimination metrics, with uncertainty.
may PM-33 | If external validation or updating was performed, report performance by dataset/site and describe any recalibration or model updating steps.
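The "sufficient specification to allow use" demanded by PM-31 means a reader can reproduce a predicted risk from the report alone. For a logistic model that amounts to publishing the intercept and coefficients; a minimal sketch, with entirely invented coefficient values and predictor names:

```python
import math

# Hypothetical published logistic regression model (all numbers invented):
# reporting the intercept and coefficients is enough for others to apply it.
INTERCEPT = -2.3
COEFS = {"age_per_10yr": 0.45, "diabetes": 0.80, "sbp_per_10mmHg": 0.15}

def predicted_risk(age_per_10yr, diabetes, sbp_per_10mmHg):
    """Risk = 1 / (1 + exp(-(intercept + sum of beta * x)))."""
    lp = (INTERCEPT
          + COEFS["age_per_10yr"] * age_per_10yr
          + COEFS["diabetes"] * diabetes
          + COEFS["sbp_per_10mmHg"] * sbp_per_10mmHg)
    return 1.0 / (1.0 + math.exp(-lp))

# Example: age 65 (6.5 decades), diabetes present, SBP 140 mmHg (14 units).
risk = predicted_risk(6.5, 1, 14.0)
print(f"Predicted risk: {risk:.1%}")
```

For models without a closed-form equation (e.g., tree ensembles), the equivalent is a shared model artifact or code, which is why PM-31 and PM-50 both accept a model file in place of coefficients.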
Discussion
Criterion | Text
must PM-40 | Discuss limitations, potential biases, and how the data and setting affect applicability of the model.
should PM-41 | Discuss implications for practice and next steps (e.g., external validation, impact analysis, monitoring).
Other
Criterion | Text
must PM-50 | Provide access to the model and supporting materials (code, model file, or full equation) and state how it should be used in practice.
should PM-51 | Provide protocol/registration information (if available) and disclose funding, competing interests, and data/code availability.