Event Schedule

Below is a selected overview of upcoming events (e.g. courses, invited talks, workshops, seminars).

The use of prognostic scores for causal inference with general treatment regimes
Speaker: Thomas Debray
  • 12:00 PM TO 12:30 PM
  • 39th Annual Conference of the International Society for Clinical Biostatistics (ISCB)
  • Melbourne, Australia

In non-randomised studies, inferring causal effects requires appropriate methods for addressing confounding bias. Although propensity score analysis is commonly adopted for this purpose, prognostic score analysis has recently been proposed as an alternative strategy. Whilst both approaches were originally introduced to estimate causal effects for binary interventions, the theory of propensity scores has since been extended to the case of general treatment regimes. Indeed, many treatments are not assigned in a binary fashion and require a certain degree of dosing. Hence, researchers may often be interested in estimating treatment effects across multiple exposure levels. To the best of our knowledge, prognostic score analysis has not yet been generalised to this case. In this article, we describe the theory of prognostic scores for causal inference with general treatment regimes. Our methods can be applied to compare multiple treatments using non-randomised data, a topic of great relevance in contemporary evaluations of clinical interventions. We propose estimators for the average treatment effects in different populations of interest, the validity of which is assessed through a series of simulations. Finally, we present an illustrative case in which we estimate the effect of the delay to aspirin administration on a composite outcome of death or dependence at 6 months in stroke patients.
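
For readers unfamiliar with prognostic score adjustment, the sketch below illustrates the basic idea for a non-binary exposure: a prognostic model is fitted in the reference (untreated) group and its linear predictor is then used as an adjustment covariate. This is only a minimal illustration with simulated data and hypothetical variable names; it is not the set of estimators proposed in the talk.

```r
# Illustrative prognostic score adjustment for a non-binary exposure (all data simulated).
set.seed(42)
n   <- 500
dat <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
dat$dose    <- pmax(0, round(rnorm(n, mean = 1 + 0.5 * dat$x1)))   # exposure with multiple levels
dat$outcome <- rbinom(n, 1, plogis(-1 + 0.8 * dat$x1 + 0.5 * dat$x2 - 0.3 * dat$dose))

# Step 1: fit the prognostic model among untreated (reference) subjects only.
prog_model <- glm(outcome ~ x1 + x2, family = binomial, data = subset(dat, dose == 0))

# Step 2: compute the prognostic score (linear predictor) for all subjects.
dat$pgs <- predict(prog_model, newdata = dat, type = "link")

# Step 3: estimate the dose-outcome association, adjusting for the prognostic score.
summary(glm(outcome ~ dose + pgs, family = binomial, data = dat))
```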

#
On the aggregation of historical prognostic scores for causal inference
Speaker: Thomas Debray
  • 39th Annual Conference of the International Society for Clinical Biostatistics (ISCB)
  • Melbourne, Australia

Randomised clinical trials (RCTs) are generally regarded as the gold standard to assess treatment effects. However, because real world evidence on drug effectiveness and safety usually involves non-randomised study designs, statistical methods to adjust for confounding bias are often needed. In the last decade, prognostic score (PGS) analysis has been proposed as a method to adjust for confounding bias; it aims to restore balance across the different treatment groups by identifying subjects with a similar prognosis under a given reference treatment. This requires the development of a multivariable prognostic model in the control arm of the study sample, which is then extrapolated to the different treatment arms. Because PGS analysis strongly relies on the absence of hidden bias (i.e. no missing confounders), it is recommended to develop the prognostic models in large cohorts of control subjects in order to adjust for many covariates. When data are sparse, prognostic models can be obtained from the published literature. We extend a previously proposed method for prediction model aggregation so that it can be used in non-randomised treatment studies to obtain valid inferences on treatment effectiveness. By aggregating these models, it becomes possible to improve the generalisability of same-sample PGS when limited individual participant data are available for the target control population. We conducted extensive simulations to assess the usefulness of model aggregation compared with other methods for confounding adjustment when estimating treatment effects. We show that aggregating existing prognostic scores into a 'meta-score' is robust to misspecification, even when the individual scores wrongly omit confounders or focus on outcomes other than the one targeted in the treatment effectiveness analysis. We illustrate our methods in a setting of treatments for asthma.
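
One established way to combine several published prognostic models is stacked regression, in which the historical linear predictors are re-weighted against the outcomes observed in the local control sample. The abstract does not state that this is the aggregation method used here, so the sketch below should be read as a generic illustration with hypothetical coefficients and simulated data.

```r
# Minimal sketch of aggregating two published prognostic scores via stacked regression
# (all coefficients and data are hypothetical).
set.seed(1)
ctrl <- data.frame(age10 = rnorm(200), prev_exac = rbinom(200, 1, 0.3), ics_dose = rnorm(200))
ctrl$y <- rbinom(200, 1, plogis(-1 + 0.6 * ctrl$age10 + 0.8 * ctrl$prev_exac))

# Linear predictors of two historical prognostic models (published coefficients, assumed here).
ctrl$score1 <- -2.0 + 0.70 * ctrl$age10 + 0.90 * ctrl$prev_exac
ctrl$score2 <- -1.5 + 0.55 * ctrl$age10 + 1.20 * ctrl$ics_dose

# Re-calibrate a weighted combination of the historical scores against the local control outcomes.
meta_score <- glm(y ~ score1 + score2, family = binomial, data = ctrl)
coef(meta_score)  # intercept plus the weight assigned to each historical score
```

The fitted combination can then serve as the prognostic score when balancing the treatment groups.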

#
A framework for meta-analysis of prediction model studies with binary and time-to-event outcomes
Speaker: Thomas Debray
  • 3:00 PM TO 3:15 PM
  • 25th Cochrane Colloquium
  • Edinburgh, United Kingdom

Background: It is widely recommended that any developed prediction model - diagnostic or prognostic - is validated externally in terms of its predictive performance, as measured by calibration and discrimination. When multiple validations have been performed, a systematic review followed by a formal meta-analysis helps to summarize overall performance across multiple settings, and reveals under which circumstances the model performs suboptimally and may need adjustment.

Objectives: To discuss how to undertake meta-analysis of the performance of prediction models with either a binary or a time-to-event outcome.

Methods: We address how to deal with incomplete availability of study-specific results (performance estimates and their precision), and how to produce summary estimates of the c-statistic, the observed:expected ratio and the calibration slope. Furthermore, we discuss the implementation of frequentist and Bayesian meta-analysis methods, and propose novel empirically based prior distributions to improve estimation of between-study heterogeneity in small samples. Finally, we illustrate all methods using two examples: a meta-analysis of the predictive performance of EuroSCORE II and of the Framingham Risk Score. All examples and meta-analysis models have been implemented in our newly developed open source R package 'metamisc'.
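
As a concrete illustration of the frequentist approach, the snippet below pools logit-transformed c-statistics from a handful of hypothetical validation studies using the metafor package. The metamisc package mentioned above provides dedicated routines for these analyses (including the observed:expected ratio and Bayesian estimation), so this is only a minimal sketch of the underlying calculation.

```r
library(metafor)

# Hypothetical validation results: c-statistics with 95% confidence intervals from 5 studies.
cstat <- c(0.72, 0.68, 0.75, 0.70, 0.66)
ci.lb <- c(0.67, 0.61, 0.70, 0.63, 0.60)
ci.ub <- c(0.77, 0.75, 0.80, 0.77, 0.72)

# Meta-analyse on the logit scale (improves normality of the sampling distribution).
yi  <- qlogis(cstat)
sei <- (qlogis(ci.ub) - qlogis(ci.lb)) / (2 * qnorm(0.975))

fit <- rma(yi = yi, sei = sei, method = "REML")

# Summary c-statistic and 95% prediction interval, back-transformed to the probability scale.
predict(fit, transf = transf.ilogit)
```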

Results: Frequentist and Bayesian meta-analysis methods often yielded similar summary estimates of prediction model performance. However, estimates of between-study heterogeneity and derived prediction intervals appeared more adequate when we applied Bayesian estimation methods.

Conclusions: Our empirical examples demonstrate that meta-analysis of prediction models is a feasible strategy despite the complex nature of corresponding studies. As developed prediction models are being validated increasingly often, and as the reporting quality is steadily improving, we anticipate that evidence synthesis of prediction model studies will become more commonplace in the near future. The R package metamisc is designed to facilitate this endeavor, and will be updated as new methods become available.

Patient or healthcare consumer involvement: The identification of relevant statistical methods was informed by previous experiences with systematic reviews of prognosis studies.

#
Systematic reviews of prognostic studies III: meta-analytical approaches in systematic reviews of prognostic studies
Speaker: Thomas Debray
  • 11:00 AM TO 12:30 PM
  • 25th Cochrane Colloquium
  • Edinburgh, United Kingdom

Background: Prediction models are commonly developed and validated for predicting the presence (diagnostic) or future occurrence (prognostic) of a particular outcome. Prediction models have become abundant in the literature. Many models have been validated in numerous different studies/publications. In addition, numerous studies have investigated the (added) value of a prognostic factor/predictor/biomarker to existing predictors. In both situations, aggregating such data is important for making inferences on the predictive performance of a specific model or predictor/marker. Meta-analytical approaches for both situations have recently been developed.

Objectives: This workshop introduces participants to statistical methods for meta-analysis in systematic reviews of prognosis studies. We address both meta-analysis of the accuracy of a prognostic model and of the (added) predictive value of a prognostic factor. We discuss the opportunities/challenges of the statistical methods and common software packages.

Description: In this workshop we illustrate these statistical approaches and how to combine - quantitatively - results from published studies on the predictive accuracy of a prognostic model or (added) predictive accuracy of a prognostic factor. We illustrate this with various empirical examples.

#
On the aggregation of historical prognostic scores for causal inference
Speaker: Thomas Debray
  • 2:40 PM TO 2:55 PM
  • 25th Cochrane Colloquium
  • Edinburgh, United Kingdom

Background: Comparative effectiveness research in non-randomized studies is often prone to various sources of confounding. Recently, prognostic score analysis has been proposed to address this issue; it aims to achieve prognostic balance across the different treatment groups. Although it is common to use the non-randomized data at hand to develop the necessary prognostic scores, this strategy is problematic when sample sizes are relatively small. It has previously been demonstrated that prognostic scores from historical cohorts may actually outperform internally developed prognostic scores for causal inference, and that their accuracy can further be improved through evidence synthesis.

Objectives: To present new meta-analysis methods for causal inference in non-randomized data sources. To this end, we consider the aggregation of multiple prognostic scores derived from historical cohorts.

Methods: We extend existing methodology for causal inference and meta-analysis of prediction models, and propose new methods to derive comparative treatment effects from non-randomized studies. We conducted an extensive simulation study based on a real clinical dataset comparing different treatment strategies for asthma control. We aggregated previously identified prognostic scores for predicting exacerbations of asthma, and used the resulting model to estimate the average treatment effect in the overall (ATE) and in the treated (ATT) population of various simulated datasets. We compared various implementation strategies by assessing the bias and mean squared error of the estimated ATE and ATT, and the ratio of the estimated standard errors to the empirical standard deviations.
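
For context on the performance measures named above, the sketch below shows how bias, mean squared error, and the ratio of the average estimated standard error to the empirical standard deviation are typically computed from simulation replicates; the replicate values here are hypothetical.

```r
# Hypothetical simulation output: one row per replicate.
set.seed(7)
true_ate <- 0.20
sims <- data.frame(ate_hat = rnorm(1000, mean = 0.22, sd = 0.05),
                   ate_se  = rnorm(1000, mean = 0.048, sd = 0.003))

bias     <- mean(sims$ate_hat) - true_ate
mse      <- mean((sims$ate_hat - true_ate)^2)
se_ratio <- mean(sims$ate_se) / sd(sims$ate_hat)  # close to 1 when SEs are well estimated

c(bias = bias, mse = mse, se_ratio = se_ratio)
```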

Conclusions: Initial simulation study results suggest that aggregation of historical prognostic scores may substantially improve the estimation of comparative treatment effects in non-randomized data sources.

Patient or healthcare consumer involvement: A clinician was involved in the provision of relevant patient-level data, and the interpretation of comparative treatment effect estimates.

#