Reliability Estimation of Individual Multi-target Regression Predictions


Abstract: To estimate the quality of an induced predictive model, we generally use measures of averaged prediction accuracy, such as the relative mean squared error on test data. Such evaluation fails to provide local information about the reliability of individual predictions, which can be important in risk-sensitive fields (medicine, finance, industry, etc.). Related work has presented several ways of computing reliability estimates for individual predictions of single-target regression models, but has not considered their use with multi-target regression models that predict a vector of independent target variables. In this paper we adapt the existing single-target reliability estimates to multi-target models, aiming to design reliability estimates that can estimate prediction errors for multi-target regression algorithms without knowing the true errors. We approach this in two ways: by aggregating reliability estimates computed for the individual target components, and by generalizing the existing reliability estimates to a higher number of dimensions. The results reveal favorable performance of the reliability estimates based on the bagging variance and local cross-validation approaches. These results are consistent with related work on single-target reliability estimates and provide support for multi-target decision making.

Conclusion: In this paper we proposed several approaches for estimating the reliability of individual multi-target regression predictions. The aggregated variants (AM, l2 and +) produce a single-valued estimate, which is preferable for interpretation and comparison. The last variant (+) is a direct generalization of the single-target estimators from the related work. Our evaluation showed that the best results were achieved using the BAGV and LCV reliability estimates, regardless of the estimate variant. This complies with the related work on single-target predictions, where these two estimates also performed well. Although all of the proposed variants achieve comparable results, our proposed generalization of the existing methods (+) remains the preferred variant due to its lower computational complexity (estimates are calculated only once for all of the target attributes) and its solid theoretical background. In further work we intend to evaluate additional reliability estimates in combination with several other regression models. We also plan to adapt the proposed methods to multi-target classification. Reliability estimation of individual predictions offers many advantages, especially when making decisions in highly sensitive environments. Our work provides effective support for model-independent multi-target regression.
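As a concrete illustration, the BAGV (bagging variance) estimate with the AM aggregation named above can be sketched as follows. This is a minimal sketch assuming scikit-learn; the data, base model, and ensemble size are illustrative choices, not those evaluated in the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy multi-target data: two targets driven by the same five inputs.
X = rng.normal(size=(200, 5))
Y = np.column_stack([X[:, 0] + 0.1 * rng.normal(size=200),
                     X[:, 1] - X[:, 2] + 0.1 * rng.normal(size=200)])

def bagv(X_train, Y_train, X_query, n_models=30, seed=0):
    """BAGV-style reliability estimate: variance of bootstrap-ensemble
    predictions per target, aggregated over targets (AM variant)."""
    boot_rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_models):
        idx = boot_rng.integers(0, len(X_train), size=len(X_train))  # bootstrap
        model = DecisionTreeRegressor(random_state=0).fit(X_train[idx], Y_train[idx])
        preds.append(model.predict(X_query))
    preds = np.stack(preds)             # (n_models, n_query, n_targets)
    per_target_var = preds.var(axis=0)  # (n_query, n_targets)
    return per_target_var.mean(axis=1)  # arithmetic mean over targets

scores = bagv(X[:150], Y[:150], X[150:])  # one reliability value per query point
```

A larger score flags a less reliable prediction; the l2 and + variants differ only in how the per-target variances are combined.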


Predicting Risk of Suicide Attempts Over Time Through Machine Learning

Abstract: Traditional approaches to the prediction of suicide attempts have limited the accuracy and scale of risk detection for these dangerous behaviors. We sought to overcome these limitations by applying machine learning to electronic health records within a large medical database. Participants were 5,167 adult patients with a claim code for self-injury (i.e., ICD-9, E95x); expert review of records determined that 3,250 patients made a suicide attempt (i.e., cases), and 1,917 patients engaged in self-injury that was nonsuicidal, accidental, or nonverifiable (i.e., controls). We developed machine learning algorithms that accurately predicted future suicide attempts (AUC = 0.84, precision = 0.79, recall = 0.95, Brier score = 0.14). Moreover, accuracy improved as the prediction window narrowed from 720 days to 7 days before the suicide attempt, and predictor importance shifted across time. These findings represent a step toward accurate and scalable risk detection and provide insight into how suicide attempt risk shifts over time.
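The four reported metrics can be computed from any set of predicted probabilities; the sketch below (assuming scikit-learn, with illustrative labels and probabilities, not the study's data) shows how AUC, precision, recall, and the Brier score are obtained.

```python
import numpy as np
from sklearn.metrics import (roc_auc_score, precision_score,
                             recall_score, brier_score_loss)

# Hypothetical labels and predicted probabilities (not the study's data).
y_true = np.array([1, 1, 1, 0, 0, 1, 0, 1])
y_prob = np.array([0.9, 0.8, 0.35, 0.3, 0.6, 0.7, 0.2, 0.55])
y_pred = (y_prob >= 0.5).astype(int)  # class labels at a 0.5 threshold

auc = roc_auc_score(y_true, y_prob)       # discrimination
prec = precision_score(y_true, y_pred)    # positive predictive value
rec = recall_score(y_true, y_pred)        # sensitivity
brier = brier_score_loss(y_true, y_prob)  # calibration (lower is better)
```

AUC and the Brier score use the raw probabilities, while precision and recall depend on the chosen classification threshold.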
Discussion: Accurate and scalable methods of suicide attempt risk detection are an important part of efforts to reduce these behaviors on a large scale. In an effort to contribute to the development of one such method, we applied ML to EHR data. Our major findings were the following: (a) this method produced more accurate prediction of suicide attempts than traditional methods (e.g., ML produced AUCs in the 0.80s, while traditional regression produced AUCs in the 0.50s and 0.60s with wider confidence intervals/greater variance), with notable lead time (up to 2 years) prior to attempts; (b) model performance steadily improved as the suicide attempt became more imminent; (c) model performance was similar for single and repeat attempters; and (d) predictor importance within algorithms shifted over time. Here, we discuss each of these findings in more detail. ML models performed with acceptable accuracy using structured EHR data mapped to known clinical terminologies such as CMS-HCC and ATC, Level 5. Recent meta-analyses indicate that traditional suicide risk detection approaches produce near-chance accuracy (Franklin et al., 2017), and a traditional method, multiple logistic regression, produced similarly poor accuracy in the present study. ML prediction of suicide attempts achieved greater discriminative accuracy than is typically obtained with traditional approaches such as logistic regression (i.e., AUC = 0.76; Kessler, Stein, et al., 2016). The present study extends this pioneering work through its use of a larger comparison group of self-injurers without suicidal intent, its ability to display a temporally variant risk profile, the scalability of the approach to any EHR data adhering to accepted clinical data standards, and its performance in terms of discriminative accuracy (AUC = 0.84, 95% CI [0.83, 0.85]), precision-recall, and calibration (see Table 1).
This approach can be readily applied within large medical databases to provide constantly updating risk assessments for millions of patients, based on an outcome derived from expert review. Although short-term risk and shifts in risk over time are often noted in clinical lore, risk guidelines, and suicide theories (e.g., O’Connor, 2011; Rudd et al., 2006; Wenzel & Beck, 2008), few studies have directly investigated these issues. The present study examined risk at several intervals from 720 to 7 days and found that model performance improved as suicide attempts became more imminent. This finding was consistent with hypotheses; however, two aspects of the present study should be considered when interpreting it. First, the pattern was confounded by the fact that more data naturally accumulated over time; predictive modeling efforts at the point of care should take advantage of this fact to improve model performance as additional data are collected. Second, due to the limitations of EHR data, we were unable to directly integrate into the present models information about potential precipitating events (e.g., job loss) or data not recorded in routine clinical care. Such information may have further improved short-term prediction of suicide attempts. Future studies should build on the present findings to further elucidate how risk changes as suicide attempts become more imminent.


Survival analysis and regression models

Abstract: Time-to-event outcomes are common in medical research, as they offer more information than simply whether or not an event occurred. To handle these outcomes, as well as censored observations where the event was not observed during follow-up, survival analysis methods should be used. Kaplan-Meier estimation can be used to graph the observed survival curves, while the log-rank test can be used to compare curves from different groups. To test continuous predictors, or several covariates at once, survival regression models such as the Cox model or the accelerated failure time (AFT) model should be used. The choice of model should depend on whether the model's assumption (proportional hazards for the Cox model, a parametric distribution of the event times for the AFT model) is met. The goal of this paper is to review basic concepts of survival analysis. Discussion relating the Cox model to the AFT model is provided. The use and interpretation of these survival methods are illustrated using an artificially simulated dataset.
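The Kaplan-Meier product-limit estimator mentioned above can be sketched in a few lines; the following is a minimal illustration on hypothetical right-censored data, not the paper's simulated dataset.

```python
import numpy as np

# Hypothetical right-censored data: follow-up time, and whether the
# event was observed (1) or the observation was censored (0).
times = np.array([2, 3, 3, 5, 7, 8, 8, 10])
events = np.array([1, 1, 0, 1, 0, 1, 1, 0])

def kaplan_meier(times, events):
    """Product-limit estimate of S(t) at each distinct event time."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    surv, curve = 1.0, []
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)              # still under observation at t
        d = np.sum((times == t) & (events == 1))  # events occurring at t
        surv *= 1 - d / at_risk                   # this risk set's survival factor
        curve.append((t, surv))
    return curve

curve = kaplan_meier(times, events)  # S(t) ≈ 0.875, 0.75, 0.6, 0.2 at t = 2, 3, 5, 8
```

Censored subjects shrink the risk set without forcing a drop in the curve, which is exactly how censoring is handled without discarding those observations.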

SUMMARY AND CONCLUSIONS
This paper reviews some basic concepts of survival analysis, including discussion and comparison of the semiparametric Cox proportional hazards model and the parametric AFT model. The appeal of the AFT model lies in the ease of interpreting its results, because the AFT model describes the effect of predictors and covariates directly on the survival time rather than through the hazard function. If the proportional hazards assumption of the Cox model is met, the AFT model can be used with the Weibull distribution, while if proportional hazards are violated, the AFT model can be used with distributions other than the Weibull.
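The Weibull point can be checked numerically: under a Weibull distribution, stretching event times by an acceleration factor (the AFT view of a covariate effect) scales the hazard by a constant, so proportional hazards hold as well. The sketch below uses illustrative parameter values.

```python
import numpy as np

# For Weibull event times the AFT and PH views coincide: stretching
# times by an acceleration factor af multiplies the hazard by the
# constant af**(-shape), independent of t.
shape, scale, af = 1.5, 10.0, 2.0

def weibull_hazard(t, shape, scale):
    return (shape / scale) * (t / scale) ** (shape - 1)

t = np.linspace(0.1, 30.0, 5)
h_base = weibull_hazard(t, shape, scale)
h_accel = weibull_hazard(t, shape, scale * af)  # event times stretched by af
ratio = h_accel / h_base  # constant at every t, i.e. proportional hazards
```

For any non-Weibull event-time distribution the ratio would vary with t, which is why the AFT model with other distributions suits data that violate proportional hazards.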

It is essential to consider the model assumptions and recognize that if the assumptions are not met, the results may be erroneous or misleading. The AFT model assumes a certain parametric distribution for the failure times and that the effect of the covariates on the failure time is multiplicative. Several different distributions should be considered before choosing one. The Cox model assumes proportional hazards of the predictors over time. Model diagnostic tools and goodness of fit tests should be utilized to assess the model assumptions before statistical inferences are made.

In conclusion, although the Cox proportional hazards model tends to be more popular in the literature, the AFT model should also be considered when planning a survival analysis. The choice should be dictated by the research hypothesis and by which model's assumptions are valid for the data being analyzed, never by which model yields a significant P value for the predictor of interest.


A Machine Learning Approach Using Survival Statistics to Predict Graft Survival in Kidney Transplant Recipients: A Multicenter Cohort Study.

Abstract: Accurate prediction of graft survival after kidney transplant is limited by the complexity and heterogeneity of risk factors influencing allograft survival. In this study, we applied machine learning methods, in combination with survival statistics, to build new prediction models of graft survival that included immunological factors, as well as known recipient and donor variables. Graft survival was estimated from a retrospective analysis of the data from a multicenter cohort of 3,117 kidney transplant recipients. We evaluated the predictive power of ensemble learning algorithms (survival decision tree, bagging, random forest, and ridge and lasso) and compared outcomes to those of conventional models (decision tree and Cox regression). Using a conventional decision tree model, the 3-month serum creatinine level post-transplant (cut-off, 1.65 mg/dl) predicted a graft failure rate of 77.8% (index of concordance, 0.71). Using a survival decision tree model increased the index of concordance to 0.80, with the episode of acute rejection during the first year post-transplant being associated with a 4.27-fold increase in the risk of graft failure. Our study revealed that early acute rejection in the first year is associated with a substantially increased risk of graft failure. Machine learning methods may provide versatile and feasible tools for forecasting graft survival.
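The index of concordance reported above (Harrell's C) can be computed directly from observed times, event indicators, and predicted risk scores. The sketch below is a minimal pure-NumPy illustration on hypothetical data, not the study's cohort.

```python
import numpy as np

def concordance_index(times, events, risk):
    """Harrell's C: fraction of comparable pairs in which the subject
    with the higher risk score fails first. A pair is comparable when
    the earlier time corresponds to an observed event."""
    conc, comp = 0.0, 0
    for i in range(len(times)):
        for j in range(len(times)):
            if events[i] == 1 and times[i] < times[j]:  # i failed first
                comp += 1
                if risk[i] > risk[j]:
                    conc += 1.0
                elif risk[i] == risk[j]:
                    conc += 0.5  # ties in risk count half
    return conc / comp

times = np.array([5, 8, 3, 12, 7])
events = np.array([1, 1, 1, 0, 1])
risk = np.array([0.7, 0.8, 0.9, 0.1, 0.5])
c = concordance_index(times, events, risk)  # 0.8 on this toy data
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is why the improvement from 0.71 to 0.80 in the study is meaningful.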