Machine Learning Methods to Predict Diabetes Complications

Abstract: One of the areas where Artificial Intelligence is having more impact is machine learning, which develops algorithms able to learn patterns and decision rules from data. Machine learning algorithms have been embedded into data mining pipelines, which can combine them with classical statistical strategies, to extract knowledge from data. Within the EU-funded MOSAIC project, a data mining pipeline has been used to derive a set of predictive models of type 2 diabetes mellitus (T2DM) complications based on electronic health record data of nearly one thousand patients. This pipeline comprises clinical center profiling, predictive model targeting, predictive model construction and model validation. After having dealt with missing data by means of random forest (RF) and having applied suitable strategies to handle class imbalance, we have used Logistic Regression with stepwise feature selection to predict the onset of retinopathy, neuropathy, or nephropathy, at different time scenarios, at 3, 5, and 7 years from the first visit at the Hospital Center for Diabetes (not from the diagnosis). The variables considered are gender, age, time from diagnosis, body mass index (BMI), glycated hemoglobin (HbA1c), hypertension, and smoking habit. The final models, tailored to each complication, provided accuracies of up to 0.838. Different variables were selected for each complication and time scenario, leading to specialized models that are easy to translate to clinical practice.
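As a hedged illustration of this kind of pipeline (a minimal sketch, not the MOSAIC code itself), the snippet below combines random-forest-based imputation, a simple class-imbalance strategy, and forward feature selection around logistic regression with scikit-learn. The feature names mirror the variables listed above, while the synthetic data and all parameter choices are assumptions for demonstration only.

import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import Pipeline

features = ["gender", "age", "time_from_diagnosis", "bmi",
            "hba1c", "hypertension", "smoking"]
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, len(features))), columns=features)
X.iloc[rng.integers(0, 500, 50), 3] = np.nan   # simulate missing BMI values
y = rng.integers(0, 2, 500)                    # onset of a complication (0/1)

model = Pipeline([
    # missing data handled by a random-forest-based imputer
    ("impute", IterativeImputer(estimator=RandomForestRegressor(n_estimators=50))),
    # forward selection as a stand-in for stepwise feature selection
    ("select", SequentialFeatureSelector(
        LogisticRegression(class_weight="balanced"), n_features_to_select=4)),
    # class_weight="balanced" as one simple way to handle class imbalance
    ("clf", LogisticRegression(class_weight="balanced")),
])
model.fit(X, y)
print(model.score(X, y))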
Conclusions: This work shows how data mining and computational methods can be effectively adopted in clinical medicine to derive models that use patient-specific information to predict an outcome of interest. Predictive data mining methods may be applied to the construction of decision models for procedures such as prognosis, diagnosis and treatment planning, which—once evaluated and verified—may be embedded within clinical information systems. Developing predictive models for the onset of chronic microvascular complications in patients suffering from T2DM could contribute to evaluating the relation between exposure to individual factors and the risk of onset of a specific complication, to stratifying the patients’ population in a medical center with respect to this risk, and to developing tools for the support of clinical informed decisions in patients’ treatment.
Machine Learning Methods to Predict Diabetes Complications

Predicting Risk of Suicide Attempts Over Time Through Machine Learning

Abstract: Traditional approaches to the prediction of suicide attempts have limited the accuracy and scale of risk detection for these dangerous behaviors. We sought to overcome these limitations by applying machine learning to electronic health records within a large medical database. Participants were 5,167 adult patients with a claim code for self-injury (i.e., ICD-9, E95x); expert review of records determined that 3,250 patients made a suicide attempt (i.e., cases), and 1,917 patients engaged in self-injury that was nonsuicidal, accidental, or nonverifiable (i.e., controls). We developed machine learning algorithms that accurately predicted future suicide attempts (AUC = 0.84, precision = 0.79, recall = 0.95, Brier score = 0.14). Moreover, accuracy improved from 720 days to 7 days before the suicide attempt, and predictor importance shifted across time. These findings represent a step toward accurate and scalable risk detection and provide insight into how suicide attempt risk shifts over time.
Discussion: Accurate and scalable methods of suicide attempt risk detection are an important part of efforts to reduce these behaviors on a large scale. In an effort to contribute to the development of one such method, we applied ML to EHR data. Our major findings included the following: (a) this method produced more accurate prediction of suicide attempts than traditional methods (e.g., ML produced AUCs in the 0.80s, traditional regression in the 0.50s and 0.60s, which also demonstrated wider confidence intervals/greater variance than the ML approach), with notable lead time (up to 2 years) prior to attempts; (b) model performance steadily improved as the suicide attempt became more imminent; (c) model performance was similar for single and repeat attempters; and (d) predictor importance within algorithms shifted over time. Here, we discuss each of these findings in more detail.

ML models performed with acceptable accuracy using structured EHR data mapped to known clinical terminologies like CMS-HCC and ATC, Level 5. Recent meta-analyses indicate that traditional suicide risk detection approaches produce near-chance accuracy (Franklin et al., 2017), and a traditional method—multiple logistic regression—produced similarly poor accuracy in the present study. ML to predict suicide attempts obtained greater discriminative accuracy than is typically obtained with traditional approaches like logistic regression (i.e., AUC = 0.76; Kessler, Stein, et al., 2016). The present study extends this pioneering work with its use of a larger comparison group of self-injurers without suicidal intent, its ability to display a temporally variant risk profile over time, the scalability of the approach to any EHR data adhering to accepted clinical data standards, and its performance in terms of discriminative accuracy (AUC = 0.84, 95% CI [0.83, 0.85]), precision, recall, and calibration (see Table 1). This approach can be readily applied within large medical databases to provide constantly updating risk assessments for millions of patients based on an outcome derived from expert review.

Although short-term risk and shifts in risk over time are often noted in clinical lore, risk guidelines, and suicide theories (e.g., O’Connor, 2011; Rudd et al., 2006; Wenzel & Beck, 2008), few studies have directly investigated these issues. The present study examined risk at several intervals from 720 to 7 days and found that model performance improved as suicide attempts became more imminent. This finding was consistent with hypotheses; however, two aspects of the present study should be considered when interpreting this finding. First, this pattern was confounded by the fact that more data naturally became available over time; predictive modeling efforts at the point of care should take advantage of this fact to improve model performance as additional data are collected. Second, due to the limitations of EHR data, we were unable to directly integrate information about potential precipitating events (e.g., job loss), or data not recorded in routine clinical care, into the present models. Such information may have further improved short-term prediction of suicide attempts. Future studies should build on the present findings to further elucidate how risk changes as suicide attempts become more imminent.
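For readers who want to reproduce the kind of evaluation reported above (AUC, precision, recall, Brier score), here is a minimal sketch with scikit-learn; the labels and predicted probabilities below are invented stand-ins, not the study's data.

import numpy as np
from sklearn.metrics import (roc_auc_score, precision_score,
                             recall_score, brier_score_loss)

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, 1000)   # 1 = verified suicide attempt (case)
# toy probabilities loosely correlated with the labels
y_prob = np.clip(0.7 * y_true + rng.normal(0.2, 0.2, 1000), 0, 1)
y_pred = (y_prob >= 0.5).astype(int)

print("AUC    =", roc_auc_score(y_true, y_prob))
print("Prec.  =", precision_score(y_true, y_pred))
print("Recall =", recall_score(y_true, y_pred))
print("Brier  =", brier_score_loss(y_true, y_prob))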
Predicting Risk of Suicide Attempts Over Time Through Machine Learning

Intelligent Movie Recommender System Using Machine Learning

Abstract: A recommender system serves the purpose of suggesting items to view or purchase. The proposed intelligent movie recommender system combines the concepts of Human-Computer Interaction and Machine Learning. The proposed system is a subclass of information filtering system that captures facial feature points as well as the emotions of a viewer and suggests movies to them accordingly. It recommends the movies best suited to users according to their age and gender, and also according to the genres they prefer to watch. The recommended movie list is created from the cumulative effect of ratings and reviews given by previous users. A neural network is trained to detect the genres of movies, such as horror or comedy, based on the emotions of the user watching the trailer. Thus, the proposed system is intelligent as well as secure, as a user is verified by comparing his face at the time of login with the one stored at the time of registration. The system is implemented as a fully dynamic interface, i.e., a website that recommends movies to the user [22].
Conclusion and Future Work
The proposed system uses a Machine Learning method for training data as well as sentiment analysis on reviews. The system provides a web-based user interface, i.e., a website that has a user database and a learning model tailored to each user. This interface is dynamic and updates regularly. The system tags a movie with the genres to which it belongs based on the expressions of users watching the trailer. The major problem with this technique arises when the viewer keeps a neutral facial expression while watching a movie; in this case the system is unable to determine the genre of the movie accurately. The recommendations are refined with the help of the reviews and ratings given by users who have watched that movie. A user is allowed to create a single account, and only he can log in from his account, as the face is verified every time. The accuracy of the proposed recommendation system can be improved by adding more analysis factors to user behavior. The location or mood of the user, and special occasions in the year such as festivals, can also be taken into consideration to recommend movies. In further updates, text summarization of reviews can be implemented, which summarizes user comments into a single line. Review authenticity checks can be applied to the system to prevent fake and misleading reviews, so that only genuine reviews would be considered in the evaluation of a movie's rating. In the future, the system can be integrated with nearby cinema halls to book movie tickets online through our website [22]. Our approach can be extended to various application domains to recommend music, books, etc.
Intelligent Movie Recommender System Using Machine Learning

Prediction of oxygen uptake dynamics by machine learning analysis of wearable sensors during activities of daily living

Abstract: Currently, oxygen uptake (V̇O2) is the most precise means of investigating aerobic fitness and level of physical activity; however, V̇O2 can only be directly measured in supervised conditions. With the advancement of new wearable sensor technologies and data processing approaches, it is possible to accurately infer work rate and predict V̇O2 during activities of daily living (ADL). The main objective of this study was to develop and verify the methods required to predict V̇O2 and investigate its dynamics during ADL. The variables derived from the wearable sensors were used to create a V̇O2 predictor based on a random forest method. The V̇O2 temporal dynamics were assessed by the mean normalized gain amplitude (MNG) obtained from frequency domain analysis. The MNG provides a means to assess aerobic fitness. The predicted V̇O2 during ADL was strongly correlated (r = 0.87, P < 0.001) with the measured V̇O2 and the prediction bias was 0.2 ml·min−1·kg−1. The MNG calculated based on predicted V̇O2 was strongly correlated (r = 0.71, P < 0.001) with the MNG calculated based on measured V̇O2 data. This new technology provides an important advance in ambulatory and continuous assessment of aerobic fitness with potential for future applications such as the early detection of deterioration of physical health.
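A minimal sketch of the general idea (not the authors' exact pipeline): a random forest regressor mapping wearable-sensor features to measured V̇O2, evaluated by the Pearson correlation and bias, as in the abstract. The features and synthetic data below are assumptions for illustration.

import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 4))                                # e.g. heart rate, acceleration, etc.
vo2 = 10 + 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(0, 1, n)  # measured V̇O2 (ml·min−1·kg−1)

X_tr, X_te, y_tr, y_te = train_test_split(X, vo2, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)

r, p = pearsonr(pred, y_te)
print("r =", round(r, 2), "| bias =", round(float(np.mean(pred - y_te)), 2))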


Prediction of oxygen uptake dynamics by machine learning analysis of wearable sensors during activities of daily living

A machine learning approach to integrate big data for precision medicine in acute myeloid leukemia

From Nature

Abstract: Cancers that appear pathologically similar often respond differently to the same drug regimens. Methods to better match patients to drugs are in high demand. We demonstrate a promising approach to identify robust molecular markers for targeted treatment of acute myeloid leukemia (AML) by introducing: (1) data from 30 AML patients, including genome-wide gene expression profiles and in vitro sensitivity to 160 chemotherapy drugs, and (2) a computational method to identify reliable gene expression markers for drug sensitivity by incorporating multi-omic prior information relevant to each gene’s potential to drive cancer. We show that our method outperforms several state-of-the-art approaches in identifying molecular markers replicated in validation data and predicting drug sensitivity accurately. Finally, we identify SMARCA4 as a marker and driver of sensitivity to topoisomerase II inhibitors, mitoxantrone, and etoposide, in AML by showing that cell lines transduced to have high SMARCA4 expression reveal dramatically increased sensitivity to these agents.

Discussion: Due to the small sample size and the potential confounding factors in the gene expression and the drug sensitivity data, standard methods to discover gene-drug associations usually fail to identify replicable signals. We present a new way to identify robust gene-drug associations by prioritizing genes based on the multi-dimensional information on each gene’s potential to drive cancer. We demonstrate that our method increases the chance that the identified gene-drug associations are replicated in validation data. This leads us to a short list of genes which are all attractive biomarkers for different classes of drugs. Our results—including the expression, drug sensitivity data, and association statistics from patient samples—have been made freely available to academic communities.

Our results suggest that high SMARCA4 expression could be a molecular marker for sensitivity to topoisomerase II inhibitors in AML cells. These results could have an enormous impact on improving patient response. Mitoxantrone is an anthracycline, like daunorubicin or idarubicin, and one of the two component classes of drugs included in nearly all upfront AML treatment regimens. It is also included (the “M”) in the CLAG-M regimen [55], a triple-drug upfront regimen now being studied as GCLAM [56]. Mitoxantrone and etoposide (also a topoisomerase II inhibitor) are two of the three drugs in the MEC regimen [57], used together with cytarabine as a common regimen for relapsed/refractory AML. Many modern regimens in clinical trials add an investigational drug to the MEC backbone, for example, an antibody to CXCR4 (NCT01120457) or an E-selectin inhibitor (NCT02306291) in combination, or decitabine priming preceding the MEC regimen [58]. Identifying a predictor of response to mitoxantrone based on clinically available biospecimens, such as leukemic blast gene expression measured prior to treatment, could potentially increase median survival for patients with high expression of SMARCA4 and indicate alternative therapies for patients with low SMARCA4 expression.

The AML patients used in our study were consecutively enrolled on a protocol to obtain laboratory samples for research. They were selected solely based on sufficient leukemia cell numbers. As the patient samples were consecutively obtained and not selected for any specific attribute, we postulated that they were representative of patients seen at a tertiary referral center and that the results would be relevant to a larger, more general clinical population. Moreover, since each of the data sets from which we collected prior information (driver features) contained many more than 30 samples (e.g., TCGA AML data), it would be highly likely that MERGE results would be more generalizable to larger clinical populations than the methods that retrieve results specifically based on the 30 AML samples. In fact, Fig. 2a, b implies higher generalizability of MERGE compared to alternative methods.

While we have genotype information on FLT3 and NPM1 and the cytogenetic risk category for most of the 30 patients, the current version of the MERGE framework did not take these features into account: our main focus was to build a general framework that could address the high-dimensionality challenge (i.e., the number of samples being much smaller than the number of genes) and make efficient use of expression data to identify robust associations. However, to consolidate our findings, we performed a covariate analysis to confirm that the top-ranked gene-drug associations discovered by MERGE remained significant when the risk group/cytogenetic features were considered in the association analysis. We checked whether the gene-drug associations shown in the heat map in Fig. 6b (highlighted as red or green) were conserved when we added each of the following as an additional covariate to the linear model: (1) cytogenetic risk, (2) FLT3 mutation status, and (3) NPM1 mutation status. In Supplementary Fig. 9, each dot corresponds to a gene-drug pair, and each color to a different covariate. Most of the dots being closer to the diagonal indicates that the associations did not decrease significantly after adding the covariates. Moreover, of 357 dots, only eight were below the horizontal red line; this indicates that 98% of the gene-drug associations MERGE uncovered were still significant (p ≤ 0.05) after modeling the covariate.
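A minimal sketch of this kind of covariate check, under the assumption that a simple linear model per gene-drug pair is close enough to the paper's analysis: fit the association with and without a covariate such as FLT3 status and compare the p-values of the expression term. The data and column names below are invented.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "sensitivity": rng.normal(size=30),     # in vitro drug sensitivity
    "expression": rng.normal(size=30),      # e.g. SMARCA4 expression
    "flt3": rng.integers(0, 2, 30),         # FLT3 mutation status covariate
})

base = smf.ols("sensitivity ~ expression", data=df).fit()
adj = smf.ols("sensitivity ~ expression + flt3", data=df).fit()
print("p (base)     =", base.pvalues["expression"])
print("p (adjusted) =", adj.pvalues["expression"])  # conserved if still <= 0.05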

A machine learning approach to integrate big data for precision medicine in acute myeloid leukemia

Comparison of Machine Learning Approaches for Prediction of Advanced Liver Fibrosis in Chronic Hepatitis C Patients.

Machine learning approaches have recently been used as non-invasive alternatives for staging chronic liver diseases, avoiding the drawbacks of biopsy. This study aims to evaluate different machine learning techniques for the prediction of advanced fibrosis by combining serum biomarkers and clinical information to develop the classification models.

A prospective cohort of 39,567 patients with chronic hepatitis C was divided into two sets – one categorized as mild to moderate fibrosis (F0-F2), and the other categorized as advanced fibrosis (F3-F4) according to METAVIR score. Decision tree, genetic algorithm, particle swarm optimization, and multilinear regression models for advanced fibrosis risk prediction were developed. Receiver operating characteristic curve analysis was performed to evaluate the performance of the proposed models.

Age, platelet count, AST, and albumin were found to be statistically significantly associated with advanced fibrosis. The machine learning algorithms under study were able to predict advanced fibrosis in chronic hepatitis C patients with AUROC ranging between 0.73 and 0.76 and accuracy between 66.3% and 84.4%.
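As a hedged sketch of one of the models named above, the snippet below fits a decision tree on four synthetic stand-ins for the significant predictors (age, platelet count, AST, albumin) and evaluates it with the AUROC; none of this is the study's actual data or tuning.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 5000
X = rng.normal(size=(n, 4))   # columns: age, platelets, AST, albumin
y = (0.8 * X[:, 0] - 0.6 * X[:, 1] + rng.normal(0, 1, n) > 0).astype(int)  # F3-F4 vs F0-F2

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)
print("AUROC =", roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1]))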

Machine-learning approaches could be used as alternative methods in prediction of the risk of advanced liver fibrosis due to chronic hepatitis C.

Comparison of Machine Learning Approaches for Prediction of Advanced Liver Fibrosis in Chronic Hepatitis C Patients.

A Machine Learning Approach Using Survival Statistics to Predict Graft Survival in Kidney Transplant Recipients: A Multicenter Cohort Study.

Abstract: Accurate prediction of graft survival after kidney transplant is limited by the complexity and heterogeneity of risk factors influencing allograft survival. In this study, we applied machine learning methods, in combination with survival statistics, to build new prediction models of graft survival that included immunological factors, as well as known recipient and donor variables. Graft survival was estimated from a retrospective analysis of the data from a multicenter cohort of 3,117 kidney transplant recipients. We evaluated the predictive power of ensemble learning algorithms (survival decision tree, bagging, random forest, and ridge and lasso) and compared outcomes to those of conventional models (decision tree and Cox regression). Using a conventional decision tree model, the 3-month serum creatinine level post-transplant (cut-off, 1.65 mg/dl) predicted a graft failure rate of 77.8% (index of concordance, 0.71). Using a survival decision tree model increased the index of concordance to 0.80, with the episode of acute rejection during the first year post-transplant being associated with a 4.27-fold increase in the risk of graft failure. Our study revealed that early acute rejection in the first year is associated with a substantially increased risk of graft failure. Machine learning methods may provide versatile and feasible tools for forecasting graft survival.
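The index of concordance mentioned above can be computed with lifelines; here is a minimal sketch on invented survival data (a higher c-index means predicted risks better match the ordering of observed failure times):

import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(3)
time_to_event = rng.exponential(60, 300)           # months to graft failure or censoring
event = rng.integers(0, 2, 300)                    # 1 = graft failure observed
risk = -time_to_event + rng.normal(0, 10, 300)     # toy model output (higher = riskier)

# concordance_index expects scores where higher = longer survival, hence -risk
print(concordance_index(time_to_event, -risk, event))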
A Machine Learning Approach Using Survival Statistics to Predict Graft Survival in Kidney Transplant Recipients: A Multicenter Cohort Study.

Gradient descent revisited via an adaptive online learning rate

The learning rate is one of the most misunderstood concepts in Machine Learning, and a lack of optimization of this parameter, which is responsible for controlling convergence, is one of the reasons a lot of money is wasted on Machine Learning as a Service (MLaaS).

Gradient descent revisited via an adaptive online learning rate

Abstract: Any gradient descent optimization requires choosing a learning rate. With deeper and deeper models, tuning that learning rate can easily become tedious and does not necessarily lead to an ideal convergence. We propose a variation of the gradient descent algorithm in which the learning rate η is not fixed. Instead, we learn η itself, either by another gradient descent (first-order method), or by Newton’s method (second-order). This way, gradient descent for any machine learning algorithm can be optimized.
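A minimal sketch of the first-order idea, assuming a plain hypergradient-style update (the paper's finite-difference scheme may differ in detail): η is itself updated by gradient descent on the loss, growing while successive gradients agree and shrinking when they oscillate.

import numpy as np

def loss(w):        # simple convex objective as a stand-in
    return 0.5 * np.sum(w ** 2)

def grad(w):
    return w

w = np.array([5.0, -3.0])
eta, beta = 0.01, 0.001     # learning rate and hyper-learning rate
g_prev = grad(w)

for step in range(100):
    g = grad(w)
    # d(loss)/d(eta) is approximately -g_t . g_{t-1}, so descending on eta gives:
    eta += beta * np.dot(g, g_prev)
    w -= eta * g
    g_prev = g

print("final loss:", loss(w), "| learned eta:", eta)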

Conclusion: In this paper, we have built a new way to learn the learning rate at each step using finite differences on the loss. We have tested it on a variety of convex and non-convex optimization tasks. Based on our results, we believe that our method is able to adapt a good learning rate at every iteration on convex problems. In the case of non-convex problems, we repeatedly observed faster training in the first few epochs. However, our adaptive model seems more inclined to overfit the training data, even though its test accuracy is always comparable to standard SGD performance, if not slightly better. Hence we believe that in neural network architectures, our model can be used initially to pretrain for a few epochs, after which training continues with any other standard optimization technique, leading to faster convergence, greater computational efficiency, and perhaps a new highest accuracy on the given problem. Moreover, the learning rate that our algorithm converges to suggests an ideal learning rate for the given training task. One could use our method to tune the learning rate of a standard neural network (using Adam for instance), giving a more precise value than with line search or random search.

Gradient descent revisited via an adaptive online learning rate

Novel Revenue Development and Forecasting Model using Machine Learning Approaches for Cosmetics Enterprises.

Abstract: In the contemporary information society, constructing an effective sales prediction model is challenging due to the sizeable amount of purchasing information obtained from diverse consumer preferences. Many empirical cases shown in the existing literature argue that traditional forecasting methods, such as the index of smoothness, moving average, and time series, have lost their dominance of prediction accuracy when compared with modern forecasting approaches such as neural network (NN) and support vector machine (SVM) models. To verify these findings, this paper utilizes Taiwanese cosmetic sales data to examine three forecasting models: i) the back propagation neural network (BPNN), ii) the least-square support vector machine (LSSVM), and iii) the auto regressive model (AR). The results show that the LSSVM has the smallest mean absolute percent error (MAPE) and the largest Pearson correlation coefficient (R) between model and predicted values.
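For reference, the two reported criteria are easy to compute; a minimal sketch on invented monthly sales figures:

import numpy as np
from scipy.stats import pearsonr

actual = np.array([120., 135., 150., 160., 155., 170.])
predicted = np.array([118., 140., 148., 158., 160., 168.])

mape = np.mean(np.abs((actual - predicted) / actual)) * 100
print("MAPE =", round(mape, 2), "% | R =", round(pearsonr(actual, predicted)[0], 3))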

Novel Revenue Development and Forecasting Model using Machine Learning Approaches for Cosmetics Enterprises.

Machine Learning Tool – MLJAR

For those looking for a paid Machine Learning alternative hosted outside their own infrastructure, MLJAR may be the answer.


MLJAR is a human-first platform for machine learning.
It provides a service for prototyping, developing and deploying pattern recognition algorithms.
It makes algorithm search and tuning painless!


You pay for the computational time used for model training, predictions, and data analysis. 1 credit is 1 computation hour on a machine with 8 CPUs and 15 GB RAM. Computational time is aggregated on a per-second basis.

Machine Learning Tool – MLJAR

Convolutional Neural Networks Applied to Parkinson's Disease Identification

Another case of Deep Learning applied to medical problems.

Convolutional Neural Networks Applied for Parkinson’s Disease Identification

Abstract: Parkinson’s Disease (PD) is a chronic and progressive illness that affects hundreds of thousands of people worldwide. Although it is quite easy to identify someone affected by PD when the illness manifests itself (e.g. tremors, slowness of movement and freezing of gait), most works have focused on studying the working mechanism of the disease in its very early stages. In such cases, drugs can be administered in order to increase the quality of life of the patients. It is well known that PD patients feature micrographia, which is related to muscle rigidity and tremors. As such, most exams to detect Parkinson’s Disease make use of handwriting assessment tools, where the individual is asked to perform some predefined tasks, such as drawing spirals and meanders on a template paper. Later, an expert analyses the drawings in order to classify the progression of the disease. In this work, we are interested in aiding physicians in this task by means of machine learning techniques, which can learn proper information from digitized versions of the exams, and then recommend a probability that a given individual is affected by PD based on his or her handwriting skills. In particular, we are interested in deep learning techniques (i.e. Convolutional Neural Networks) due to their ability to learn features without human interaction. Additionally, we propose to fine-tune the hyperparameters of such techniques by means of meta-heuristic techniques, such as the Bat Algorithm, the Firefly Algorithm and Particle Swarm Optimization.

Convolutional Neural Networks Applied to Parkinson's Disease Identification

DeepCancer: Detecting Cancer through Gene Expressions via Deep Learning

This paper presents a Deep Learning implementation that, if confirmed, could be a great advance in the diagnostics industry for health services, given that through algorithmic learning several types of cancer-related genes can be identified; this could bring two positive externalities: 1) cheaper and faster diagnosis, and 2) a complete reformulation of the strategy for fighting and preventing diseases.

DeepCancer: Detecting Cancer through Gene Expressions via Deep Generative Learning

Abstract: Transcriptional profiling on microarrays to obtain gene expressions has been used to facilitate cancer diagnosis. We propose a deep generative machine learning architecture (called DeepCancer) that learns features from unlabeled microarray data. These models have been used in conjunction with conventional classifiers that perform classification of the tissue samples as either being cancerous or non-cancerous. The proposed model has been tested on two different clinical datasets. The evaluation demonstrates that the DeepCancer model achieves a very high precision score, while significantly controlling the false positive and false negative rates.

Conclusions: We presented a deep generative learning model, DeepCancer, for the detection and classification of inflammatory breast cancer and prostate cancer samples. The features are learned through an adversarial feature learning process and then sent as input to a conventional classifier specific to the objective of interest. After modifications through specified hyperparameters, the model performs comparatively well on the tasks, tested on two different datasets. The proposed model utilized cDNA microarray gene expressions to gauge its efficacy. Based on deep generative learning, the tuned discriminator and generator models, D and G respectively, learned to differentiate between the gene signatures without any intermediate manual feature handpicking, indicating that much bigger datasets can be handled by the proposed model more seamlessly. The DeepCancer model will be a vital aid to the medical imaging community and, ultimately, reduce inflammatory breast cancer and prostate cancer mortality.

DeepCancer: Detecting Cancer through Gene Expressions via Deep Learning

Hardware for Machine Learning: Challenges and Opportunities

A great paper on how hardware will play a crucial role in a few years for core Machine Learning, especially in embedded systems.

Hardware for Machine Learning: Challenges and Opportunities

Abstract—Machine learning plays a critical role in extracting meaningful information out of the zettabytes of sensor data collected every day. For some applications, the goal is to analyze and understand the data to identify trends (e.g., surveillance, portable/wearable electronics); in other applications, the goal is to take immediate action based on the data (e.g., robotics/drones, self-driving cars, smart Internet of Things). For many of these applications, local embedded processing near the sensor is preferred over the cloud due to privacy or latency concerns, or limitations in the communication bandwidth. However, at the sensor there are often stringent constraints on energy consumption and cost in addition to throughput and accuracy requirements. Furthermore, flexibility is often required such that the processing can be adapted for different applications or environments (e.g., update the weights and model in the classifier). In many applications, machine learning often involves transforming the input data into a higher dimensional space, which, along with programmable weights, increases data movement and consequently energy consumption. In this paper, we will discuss how these challenges can be addressed at various levels of hardware design ranging from architecture, hardware-friendly algorithms, mixed-signal circuits, and advanced technologies (including memories and sensors).

Conclusions: Machine learning is an important area of research with many promising applications and opportunities for innovation at various levels of hardware design. During the design process, it is important to balance the accuracy, energy, throughput and cost requirements. Since data movement dominates energy consumption, the primary focus of recent research has been to reduce the data movement while maintaining performance accuracy, throughput and cost. This means selecting architectures with favorable memory hierarchies like a spatial array, and developing dataflows that increase data reuse at the low-cost levels of the memory hierarchy. With joint design of algorithm and hardware, reduced bitwidth precision, increased sparsity and compression are used to minimize the data movement requirements. With mixed-signal circuit design and advanced technologies, computation is moved closer to the source by embedding computation near or within the sensor and the memories. One should also consider the interactions between these different levels. For instance, reducing the bitwidth through hardware-friendly algorithm design enables reduced precision processing with mixed-signal circuits and non-volatile memory. Reducing the cost of memory access with advanced technologies could result in more energy-efficient dataflows.
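One of the techniques named above, reduced bitwidth precision, is easy to illustrate; here is a minimal sketch of uniform 8-bit weight quantization (an assumption-level example, not the paper's scheme), which cuts weight storage and data movement by 4x versus float32:

import numpy as np

def quantize_uint8(w):
    # map float weights to 8-bit codes plus a scale and offset
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / 255.0
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    return q.astype(np.float32) * scale + lo

w = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, scale, lo = quantize_uint8(w)
print("max abs error:", np.abs(dequantize(q, scale, lo) - w).max())
print("bytes: float32 =", w.nbytes, "-> uint8 =", q.nbytes)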

Hardware for Machine Learning: Challenges and Opportunities

Churn-at-Risk: Applying Survival Analysis to Subscription Churn Control in Telecom


One of the most recurrent topics in any kind of subscription service is how to reduce churn (customer attrition), given that winning new customers is much harder (and more expensive) than keeping the old ones.

About 70% of companies know that it is cheaper to keep a customer than to have to go after a new one.

Making a simple analogy, the profit from subscription services is like the blood in a company's bloodstream, and an interruption of any kind harms the whole business, given that this is a revenue model based on recurring billing rather than on developing, or even selling, other products.

In business models based on the volume of people willing to accept a recurring charge, things get considerably harder, given that, unlike products with greater elasticity, the revenue stream is extremely subject to the whims of the market and of the customers.

In this scenario, for every company whose revenue stream is based on this kind of business, knowing when a customer will reach the point of leaving by cancelling the service (churn) is fundamental for creating more effective retention mechanisms, or even customer-contact policies to avoid or minimize the chance of a customer leaving the base.

Therefore, any mechanism or even effort to minimize this effect is of great value. We turned to statistical theory to seek answers to the following questions:

  • How can we reduce churn?
  • How can we identify a potential customer who will churn? Which strategies should we follow to minimize this churn?
  • Which customer-communication policies should we have in place to understand the reasons driving a subscriber to cancel the service, and which customer win-back strategies are possible in this scenario?

And to answer these questions, we went looking for answers in survival analysis, given that this area of statistics is one of the best at handling lifetime probabilities with censored data, whether for materials (e.g. the time to failure of some mechanical system) or for the lifetime of people themselves (e.g. given a certain treatment regimen, what is the estimated probability of a patient surviving cancer), and, in our case, how long a subscriber lives until cancelling the subscription.

Survival Analysis

Survival analysis is a statistical technique that was developed in medicine and whose main purpose is to estimate the survival time, or time to death, of a given patient within a time horizon.

The Kaplan-Meier estimator (1958) uses a survival function based on the ratio between the number of observations that have not failed by time t and the total number of observations in the study, where each time interval has its own count of failures/deaths/churns, and the risk is computed according to the number of individuals remaining at the subsequent time.
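In symbols (a standard formulation of the estimator, added here for reference), with $d_i$ failures/churns and $n_i$ individuals at risk at time $t_i$:

$$\hat{S}(t) = \prod_{t_i \le t} \left(1 - \frac{d_i}{n_i}\right)$$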

The Nelson-Aalen estimator (1978), in turn, has the same characteristics as Kaplan-Meier, with the difference that it works with the cumulative hazard rate function instead of the survival function.
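In symbols, again in a standard formulation, the Nelson-Aalen estimate of the cumulative hazard is:

$$\hat{H}(t) = \sum_{t_i \le t} \frac{d_i}{n_i}$$

This estimator is also available in lifelines; a minimal sketch, assuming the same T (durations) and C (event flags) arrays defined further below in this post:

from lifelines import NelsonAalenFitter

naf = NelsonAalenFitter()
naf.fit(T, event_observed=C)
naf.plot()   # plots the estimated cumulative hazard curve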

The fundamental elements for characterizing a study involving survival analysis are (a) the initial time, (b) the measurement scale of the time interval, and (c) whether the churn event occurred.

The main articles are by Aalen (1978), Kaplan-Meier (1958) and Cox (1972).

This post does not aim to give any kind of introduction to survival analysis, given that there are plenty of references on the internet about the subject and there is nothing to be added in that regard by this poor blogger.

Like cohort analysis, survival analysis is characterized above all by being a study of a longitudinal nature, that is, its results have a temporal character, both retrospectively and prospectively; in other words, it gives a typically temporal answer for a given event of interest.

What we will use as a form of sample comparison is the longitudinal behavior, according to given characteristics of different samples over time, and the factors that influence churn.

For obvious NDA reasons, we will not post here any characteristics that could indicate any business strategy or identify information of any nature.

We can say that survival analysis applied to a telecom case can help produce an estimate, in the form of a probability, of how long a subscription will last until the churn event (cancellation), and from there devise strategies to avoid that event, given that acquiring a new customer is more expensive than keeping an old one; this fits entirely within a Customer Winback strategy (Note: the book Customer Winback by Jill Griffin and Michael Lowenstein is mandatory reading for everyone who works with subscription services or businesses that depend heavily on recurrence, such as commerce).

In our case, since we are talking about subscription services, the failure time or death time, i.e. our event of interest, is churn, the cancellation of the subscription. In other words, we would have something like Time-to-Churn or Churn-at-Risk. Keep that term in mind.


We used data from two old products, where the data were anonymized and a uniform scrambling hash (following a specific distribution) was applied to the attributes (for privacy reasons), which are:

  • id = record identifier;
  • product = product;
  • channel = channel through which the customer entered the base;
  • free_user = flag indicating whether the customer entered the base on a free trial or not;
  • user_plan = whether the user is prepaid or postpaid;
  • t = time the subscriber has been in the base; and
  • c = whether the event of interest (in this case churn, i.e. cancellation of the subscription) occurred or not.

We eliminated the left-censoring effect by removing reactivation cases, given that we wanted to understand the subscriber journey as a whole, without any bias related to customer win-back. Regarding right censoring, we have some very specific cases, as a few months have already passed since this database was extracted.

An important technical aspect to consider is that these two products are in comparable categories; without that, any kind of characterization would be meaningless.

At the end of this implementation we will have a life table for these products.


First let's import the libraries: Pandas (for data manipulation), matplotlib (for plotting), and lifelines for the survival analysis:

%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import lifelines

After importing the libraries, let's adjust the figure size for better visualization:

%pylab inline
pylab.rcParams['figure.figsize'] = (14, 9)

Let's load our database, creating an object called df, using Pandas' read_csv function:

df = pd.read_csv('')

Let's check our database:

id product channel free_user user_plan t c
0 3315 B HH 1 0 22 0
1 2372 A FF 1 1 16 0
2 1098 B HH 1 1 22 0
3 2758 B HH 1 1 4 1
4 2377 A FF 1 1 29 0

So, as we can see, we have the 7 variables in our database.

Next, let's import from the lifelines library the Kaplan-Meier estimator, KaplanMeierFitter:

from lifelines import KaplanMeierFitter

kmf = KaplanMeierFitter()

Having imported the Kaplan-Meier estimator class into the kmf object, let's assign our time variable (T) and our event-of-interest variable (C):

T = df["t"]

C = df["c"]

What we did above: we fetched the array t from the dataframe df and assigned it to the object T, and fetched the array of column c from the dataframe and assigned it to the object C.

Now let's call the fit method using these two objects in the snippet below:

kmf.fit(T, event_observed=C)
<lifelines.KaplanMeierFitter: fitted with 10000 observations, 6000 censored>

With the object fitted, let's now look at the graph for this object using the Kaplan-Meier estimator.

kmf.plot()
plt.title('Survival function of Service Valued Add Products');
plt.ylabel('Probability of Living (%)')
plt.xlabel('Lifespan of the subscription (in days)')
<matplotlib.text.Text at 0x101b24a90>

As we can see in the graph, there are some pertinent observations when we treat the survival probability of these two products in aggregate:

  • Right on the first day there is a substantial reduction in the survival of the subscription, of approximately 22%;
  • There is an almost linear decay after the fifth day of subscription; and
  • After day number 30, the survival probability of a subscription is approximately 50%. In other words: after 30 days, half of the new subscribers will already be out of the subscriber base.

Next, let's plot the same survival function considering the statistical confidence intervals.

kmf.plot()
plt.title('Survival function of Service Valued Add Products - Confidence Interval in 85-95%');
plt.ylabel('Probability of Living (%)')
plt.xlabel('Lifespan of the subscription')
<matplotlib.text.Text at 0x10ad8e0f0>

However, this initial model has some clear limitations:

  • The aggregate data do not say much about the dynamics that may lie in the specificity of some attributes/dimensions;
  • The dimensions (or breakdowns) according to the attributes that came with the database are not explored; and
  • There is no split by product.

To address this, let's start going into detail on each of the dimensions and see what influence each one has on the survival function.

Let's start with the breakdown by the dimension that determines whether the customer entered on a free trial or not (free_user).

ax = plt.subplot(111)

free = (df["free_user"] == 1)
kmf.fit(T[free], event_observed=C[free], label="Free Users")
kmf.plot(ax=ax, ci_force_lines=True)
kmf.fit(T[~free], event_observed=C[~free], label="Non-Free Users")
kmf.plot(ax=ax, ci_force_lines=True)
plt.title("Lifespans of different subscription types");
plt.ylabel('Probability of Living (%)')
<matplotlib.text.Text at 0x10ad8e908>

This graph presents some important information for the first insights about each of the survival curves with respect to the kind of free trial offered as an influencing factor for churn:

  • Subscribers who enter as non-free (i.e. without any kind of initial free period) show, after the 15th day, a brutal decay of more than 40% in the chance of survival (considering the confidence interval);
  • After the 15th day, subscribers who do not enjoy a free period have their survival curve in relative stability, around a 60% survival probability, up to the censored period;
  • Still on the users without a free period: given the degree of variability of the confidence interval, we can conclude that many cancellations are happening very quickly, which should be investigated more carefully by the product team; and
  • Users who enter via a free period (i.e. get some free days before being charged) show a greater decay in the survival level, both in the initial period and over time, yet a certain stability is found throughout the whole series, without major jolts.

Given this initial analysis of the survival curves, let's now evaluate the survival probabilities by product.

ax = plt.subplot(111)

product = (df["product"] == "A")
kmf.fit(T[product], event_observed=C[product], label="Product A")
kmf.plot(ax=ax, ci_force_lines=True)
kmf.fit(T[~product], event_observed=C[~product], label="Product B")
kmf.plot(ax=ax, ci_force_lines=True)

plt.title("Survival Curves of different Products");
plt.ylabel('Probability of Living (%)')
<matplotlib.text.Text at 0x10aeaabe0>

This graph presents the first distinction between the two products in a clearer way.

Even with the confidence intervals varying by 5%, we can see that product A (blue line) has a higher survival probability, with a percentage difference of more than 15%; this difference is amplified after the twentieth day.

In other words: given a certain cohort of users, a user who enters product A has a retention probability about 15% higher than a user who happens to enter product B; put differently, product A has a longer retention tail than product B.

Empirically, it is known that one of the main influencing factors for SVA (value-added service) products is the media channel through which these products are offered.

The media channel is the thermometer by which we can tell whether we are offering our products to the right target audience.

For a better understanding, though, let's analyze the channels in which the subscriptions originate.

First, let's normalize the channel variable to segment the channels according to the dataset.

df['channel'] = df['channel'].astype('category');
channels = df['channel'].unique()

After normalizing and converting the variable to the categorical type, let's see what the array looks like.

[HH, FF, CC, AA, GG, ..., BB, EE, DD, JJ, ZZ]
Length: 11
Categories (11, object): [HH, FF, CC, AA, ..., EE, DD, JJ, ZZ]

Here we have the representation of the 11 media channels through which customers entered the service.

With these channels in hand, let's identify the survival probability according to the channel.

for i,channel_type in enumerate(channels):
    ax = plt.subplot(3,4,i+1)
    ix = df['channel'] == channel_type
    kmf.fit(T[ix], C[ix], label=channel_type)
    kmf.plot(ax=ax, legend=True)
    if i==0:
        plt.ylabel('Probability of Survival by Channel (%)')
Analyzing each of these graphs, we have some considerations about each of the channels:
  • HH, DD: a high mortality (churn) rate right before the first 5 days, which indicates something ephemeral about, or a lack of attractiveness of, the product for the audience of this media channel.
  • FF: shows less than a 10% mortality rate in the first 20 days, and has a very particular pattern after the 25th day, after which mortality is practically no longer that high. It carries a confidence interval with very strong oscillation.
  • CC: together with HH, despite having a high mortality rate before the 10th day, it shows a very good degree of predictability, which can be used in media incentive strategies that need greater certainty in terms of medium-term retention.
  • GG, BB: show a good survival rate at the beginning of the period, yet have severe oscillations in their respective confidence intervals. This variable must be considered when devising an investment strategy for these channels.
  • JJ: if there were a definition of uncertainty in terms of survival, this channel would be its best representative. With its confidence intervals oscillating by more than 40% between the lower and upper limits, this media channel proves extremely risky for investment, given that there is no kind of regularity/predictability according to these data.
  • II: despite having a good degree of predictability in the survival rate over the first 10 days, after that period it has a very severe hazard curve, which indicates that this kind of channel could be used in a short-term strategy.
  • AA, EE, ZZ: since there is some form of censoring in the data, they need more analysis at this first moment. (Drill into the data and check whether it is right censoring or some kind of truncation.)

Now that we know a bit about the dynamics of each channel, let's create a life table for these data.

The life table is nothing more than a tabular representation of the survival function with respect to the survival days.

For this we will use the utils module of lifelines to get to that value.

from lifelines.utils import survival_table_from_events

Library imported; now let's use our T and C variables again to build the life table.

lifetable = survival_table_from_events(T, C)

Table built; let's take a look at the dataset.

print (lifetable)
          removed  observed  censored  entrance  at_risk
0            2250      2247         3     10000    10000
1             676       531       145         0     7750
2             482       337       145         0     7074
3             185       129        56         0     6592
4             232        94       138         0     6407
5             299        85       214         0     6175
6             191        73       118         0     5876
7             127        76        51         0     5685
8             211        75       136         0     5558
9            2924        21      2903         0     5347
10            121        27        94         0     2423
11             46        27        19         0     2302
12             78        26        52         0     2256
13            111        16        95         0     2178
14             55        35        20         0     2067
15            107        29        78         0     2012
16            286        30       256         0     1905
17            156        23       133         0     1619
18            108        18        90         0     1463
19             49        11        38         0     1355
20             50        17        33         0     1306
21             61        13        48         0     1256
22            236        23       213         0     1195
23             99         6        93         0      959
24            168         9       159         0      860
25            171         7       164         0      692
26             58         6        52         0      521
27             77         2        75         0      463
28             29         6        23         0      386
29            105         1       104         0      357
30             69         0        69         0      252
31            183         0       183         0      183

Unlike R, which provides the life table with the percentage for the survival probability, in this case we will have to make a small adjustment to obtain the percentage, using the entrance and at_risk attributes.

The adjustment is done as follows:

survivaltable = lifetable.at_risk/np.amax(lifetable.entrance)

Adjustments made; let's see what our life table looks like.

0     1.0000
1     0.7750
2     0.7074
3     0.6592
4     0.6407
5     0.6175
6     0.5876
7     0.5685
8     0.5558
9     0.5347
10    0.2423
11    0.2302
12    0.2256
13    0.2178
14    0.2067
15    0.2012
16    0.1905
17    0.1619
18    0.1463
19    0.1355
20    0.1306
21    0.1256
22    0.1195
23    0.0959
24    0.0860
25    0.0692
26    0.0521
27    0.0463
28    0.0386
29    0.0357
30    0.0252
31    0.0183
Name: at_risk, dtype: float64

Let's turn our life table into a pandas object for easier manipulation of the dataset.

survtable = pd.DataFrame(survivaltable)

For Churn-at-Risk update scenarios, we can define a function that already holds the life table and can assign the survival probability according to the number of survival days.

For that, let's write a simple function in plain Python.

def survival_probability(days):
    print("The probability of Survival after", days, "days is", survtable["at_risk"].iloc[days]*100, "%")

In this case, let's look at the survival chance, using our already fitted Kaplan-Meier model, for a subscription that is 22 days old.

In [22]: survival_probability(22)
The probability of Survival after 22 days is 11.95 %

In other words, this subscription has only an 11.95% probability of still being active, which means that at some point very soon it may be cancelled.


As we can see above, using survival analysis we can draw interesting insights from our dataset, in particular to discover the duration of the subscriptions in our base and to estimate a time until the churn event.

The data used reflect the behavior of two real products, which were anonymized for obvious NDA reasons. Even so, nothing prevents the use and adaptation of this code for other experiments. An important point about this database is that, as can be observed, we have a very pronounced right censoring, which somewhat limits the long-term view of the data, especially if there is some kind of long tail in the churn event.

As I presented at the São Paulo Big Data Meetup in March, there is a series of architectures that can be combined with this kind of analysis, in particular Deep Learning methods, which can be the endpoint of a prediction pipeline.

I hope you enjoyed it, and if you have any questions, send a message to flavioclesio at

PS: Special thanks to my colleagues and reviewers Eiti Kimura, Gabriel Franco and Fernanda Eleuterio.

Churn-at-Risk: Applying Survival Analysis to Subscription Churn Control in Telecom

Crime Topic Modeling using Machine Learning

With the increase in violence in our country (where we have more than 60 thousand murders per year), it is of fundamental importance that all secretariats and other bureaucratic departments of the state stay one step ahead of crime, and not only that: that they map occurrences correctly so that preventive measures (e.g. patrolling, intelligence, et cetera) have the highest possible assertiveness.

And not only that: with correct mapping, beyond policing issues that can be corrected, decisions about creating/changing legislation can also be made on more solid grounds, discarding all the proselytizing that is done around this issue.

Crime Topic Modeling – Da Kuang, P. Jeffrey Brantingham, Andrea L. Bertozzi

Abstract: The classification of crime into discrete categories entails a massive loss of information. Crimes emerge out of a complex mix of behaviors and situations, yet most of these details cannot be captured by singular crime type labels. This information loss impacts our ability to not only understand the causes of crime, but also how to develop optimal crime prevention strategies. We apply machine learning methods to short narrative text descriptions accompanying crime records with the goal of discovering ecologically more meaningful latent crime classes. We term these latent classes “crime topics” in reference to text-based topic modeling methods that produce them. We use topic distributions to measure clustering among formally recognized crime types. Crime topics replicate broad distinctions between violent and property crime, but also reveal nuances linked to target characteristics, situational conditions and the tools and methods of attack. Formal crime types are not discrete in topic space. Rather, crime types are distributed across a range of crime topics. Similarly, individual crime topics are distributed across a range of formal crime types. Key ecological groups include identity theft, shoplifting, burglary and theft, car crimes and vandalism, criminal threats and confidence crimes, and violent crimes. Crime topic modeling positions behavioral situations as the focal unit of analysis for crime events. Though unlikely to replace formal legal crime classifications, crime topics provide a unique window into the heterogeneous causal processes underlying crime. We discuss whether automated procedures could be used to cross-check the quality of official crime classifications.
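A minimal sketch of the general technique (text-based topic modeling over short narratives); the paper's exact pipeline may differ, and the narratives below are invented examples:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

narratives = [
    "suspect broke window and took laptop from vehicle",
    "victim threatened by phone call demanding money",
    "suspect concealed merchandise and left store without paying",
    "unknown person used victim's credit card online",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(narratives)
nmf = NMF(n_components=2, random_state=0).fit(X)

terms = tfidf.get_feature_names_out()
for k, topic in enumerate(nmf.components_):
    top = [terms[i] for i in topic.argsort()[-3:]]
    print("crime topic", k, ":", top)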



Crime topic modeling using Machine Learning

Hedge fund return prediction and fund selection: a Machine Learning approach

Beyond the good results, the biggest differentiator of this article is the methodology: the authors divided the funds into four categories (equity, event-driven, macro, and relative value) and ran cross-sectional analyses to measure performance. Without a doubt, a good article for anyone who wants to work with this type of fund, or even run their own private fund using Machine Learning.

Hedge Fund Return Prediction and Fund Selection: A Machine-Learning Approach – Jiaqi Chen, Wenbo Wu, and Michael L. Tindall – Federal Reserve Bank of Dallas 

Abstract: A machine-learning approach is employed to forecast hedge fund returns and perform individual hedge fund selection within major hedge fund style categories. Hedge fund selection is treated as a cross-sectional supervised learning process based on direct forecasts of future returns. The inputs to the machine-learning models are observed hedge fund characteristics. Various learning processes including the lasso, random forest methods, gradient boosting methods, and deep neural networks are applied to predict fund performance. They all outperform the corresponding style index as well as a benchmark model, which forecasts hedge fund returns using macroeconomic variables. The best results are obtained from machine-learning processes that utilize model averaging, model shrinkage, and nonlinear interactions among the factors.

Conclusions: We propose a supervised machine-learning approach to forecast hedge fund returns and select hedge funds quantitatively. The framework is based on cross-sectional forecasts of hedge fund returns utilizing a set of 17 factors. The approach allows the investor to identify funds that are likely to perform well and to construct the corresponding portfolios. We find that our method is applicable across hedge fund style categories. Focusing on factors constructed from characteristics idiosyncratic to individual funds, our models offer distinctive perspectives when compared to models that are driven by macroeconomic variables. Retrospectively, when benchmarked against a traditional factor model, our machine-learning approach generates portfolios with large alphas. The relatively low explanatory power of the regressions indicates that most of the performance of the algorithm-generated portfolios is due to success in identifying funds likely to deliver good performance. Our approach is flexible enough to incorporate new developments both in the risk-factor research field and in the machine-learning field.
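As a rough illustration of the cross-sectional setup, the sketch below fits a lasso on fund characteristics observed in one period to forecast next-period returns and then selects the top decile. All column names and data here are hypothetical stand-ins; the paper uses 17 factors and evaluates several learners (lasso, random forests, boosting, deep networks) out of sample.

# Sketch of cross-sectional fund selection: fit a model on fund
# characteristics in month t to forecast month t+1 returns, then
# pick the top decile. Data and column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_funds = 200
X_t = pd.DataFrame({
    "past_return": rng.normal(0.01, 0.05, n_funds),
    "volatility": rng.uniform(0.01, 0.20, n_funds),
    "fund_age_years": rng.uniform(1, 25, n_funds),
})
# Synthetic next-month returns standing in for realized returns.
y_next = (0.5 * X_t["past_return"] - 0.02 * X_t["volatility"]
          + rng.normal(0, 0.02, n_funds))

model = Lasso(alpha=1e-3).fit(X_t, y_next)

# For brevity this sketch scores the same cross-section it was fit on;
# a faithful backtest fits on past months and forecasts the next one.
forecasts = model.predict(X_t)
top_decile = np.argsort(forecasts)[-n_funds // 10:]
print("selected fund indices:", top_decile)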


Hedge fund return prediction and fund selection: a Machine Learning approach

Learning Pulse: a Machine Learning approach to predicting performance in self-regulated learning using multimodal data

Everyone knows that education is a hot topic these days, and the key question is: how do we turn smartphones from attention villains into a tool for monitoring and tracking academic performance?

This article offers an interesting answer to that question.

Learning Pulse: a machine learning approach for predicting performance in self-regulated learning using multimodal data


Abstract: Learning Pulse explores whether a machine learning approach applied to multimodal data such as heart rate, step count, weather condition and learning activity can predict learning performance in self-regulated learning settings. An experiment was carried out lasting eight weeks involving PhD students as participants, each of them wearing a Fitbit HR wristband and having the applications used on their computer recorded during their learning and working activities throughout the day. A software infrastructure for collecting multimodal learning experiences was implemented. As part of this infrastructure a Data Processing Application was developed to pre-process, analyse and generate predictions to provide feedback to the users about their learning performance. Data from different sources were stored using the xAPI standard into a cloud-based Learning Record Store. The participants of the experiment were asked to rate their learning experience through an Activity Rating Tool indicating their perceived level of productivity, stress, challenge and abilities. These self-reported performance indicators were used as markers to train a Linear Mixed Effect Model to generate learner-specific predictions of the learning performance. We discuss the advantages and the limitations of the used approach, highlighting further development points.


Conclusions: This paper described Learning Pulse, an exploratory study whose aim was to use predictive modelling to generate timely predictions about learners' performance during self-regulated learning by collecting multimodal data about their body, activity and context. Although the prediction accuracy with the data sources and experimental setup chosen in Learning Pulse led to modest results, all the research questions were answered positively and led to new insights on storing, modelling and processing multimodal data. We raise some of the unsolved challenges that can be considered a research agenda for future work in the field of Predictive Learning Analytics with "beyond-LMS" multimodal data: 1) the number of self-reports vs. unobtrusiveness; 2) the homogeneity of the learning task specifications; 3) the approach to modelling random effects; 4) alternative machine learning techniques. There is a clear trade-off between the frequency of self-reports and the seamlessness of the data collection: the number of self-reports cannot be increased without degrading the quality of the learning process being observed, yet a high number of labels is essential for supervised machine learning to work correctly. In addition, a more robust way of modelling random effects must be found; the solution adopted here, grouping them manually into categories, is not scalable. Learning is inevitably made up of random effects, i.e. voluntary and unpredictable actions taken by the learners, and the sequence of such events is also important and must be taken into account with appropriate models. As an alternative to supervised learning, unsupervised methods can also be investigated: with those methods, fine-graining the data into small intervals does not create problems of matching the corresponding labels, and a large number of labels is no longer needed. Regarding the experimental setup, it would be best to have a set of coherent learning tasks that the participants need to accomplish, contrary to how it was done in Learning Pulse, where the participants had completely different tasks, topics and working rhythms. It would also be useful to have a baseline group of participants without access to the visualisations while another group does have access; that would make it possible to see whether there is an actual increase in performance. To conclude, Learning Pulse set the first steps towards a new and exciting research direction: the design and development of predictive learning analytics systems that exploit multimodal data about learners, their contexts and their activities to predict their current learning state and thereby generate timely feedback for learning support.
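For readers curious about the modelling step, below is a minimal sketch of a linear mixed-effects model with a per-learner random intercept, fit with statsmodels on synthetic data. The variable names and the simple two-predictor formula are assumptions for illustration, not the study's actual feature set.

# Sketch of a linear mixed-effects model with a random intercept
# per learner, in the spirit of Learning Pulse. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "learner": rng.integers(0, 8, n).astype(str),  # a handful of participants
    "heart_rate": rng.normal(70, 10, n),
    "step_count": rng.poisson(50, n),
})
# Synthetic self-reported productivity on a 1-5 scale.
df["productivity"] = (3 + 0.01 * (df["heart_rate"] - 70)
                      + 0.005 * df["step_count"]
                      + rng.normal(0, 0.5, n)).clip(1, 5)

# The random intercept per learner captures individual baselines,
# which is what makes the predictions learner-specific.
model = smf.mixedlm("productivity ~ heart_rate + step_count",
                    data=df, groups=df["learner"])
result = model.fit()
print(result.summary())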



Learning Pulse: a Machine Learning approach to predicting performance in self-regulated learning using multimodal data

Comparison of a Machine Learning model with EuroSCORE II in predicting mortality after elective cardiac surgery

Yet another study pitting Machine Learning algorithms against traditional scoring methods, with Machine Learning coming out on top.

A Comparison of a Machine Learning Model with EuroSCORE II in Predicting Mortality after Elective Cardiac Surgery: A Decision Curve Analysis

Abstract: The benefits of cardiac surgery are sometimes difficult to predict and the decision to operate on a given individual is complex. Machine Learning and Decision Curve Analysis (DCA) are recent methods developed to create and evaluate prediction models.

Methods and findings: We conducted a retrospective cohort study using a prospectively collected database from December 2005 to December 2012, from a cardiac surgical center at a University Hospital. Different models for predicting in-hospital mortality after elective cardiac surgery, including EuroSCORE II, a logistic regression model and a machine learning model, were compared by ROC and DCA. Of the 6,520 patients having elective cardiac surgery with cardiopulmonary bypass, 6.3% died. Mean age was 63.4 years old (standard deviation 14.4), and mean EuroSCORE II was 3.7 (4.8)%. The area under the ROC curve (95% CI) for the machine learning model (0.795 (0.755–0.834)) was significantly higher than for EuroSCORE II or the logistic regression model (respectively, 0.737 (0.691–0.783) and 0.742 (0.698–0.785), p < 0.0001). Decision Curve Analysis showed that the machine learning model, in this monocentric study, has a greater benefit whatever the probability threshold.

Conclusions: According to ROC and DCA, the machine learning model is more accurate in predicting mortality after elective cardiac surgery than EuroSCORE II. These results support the use of machine learning methods in the field of medical prediction.
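Since Decision Curve Analysis may be less familiar than ROC, here is a small sketch of the net-benefit computation it rests on, evaluated on synthetic predictions. The idea is to weight false positives by the odds of the chosen probability threshold; the data below are invented, with the prevalence set near the study's 6.3% mortality.

# Sketch of Decision Curve Analysis: net benefit of a risk model
# across probability thresholds, on synthetic outcomes/predictions.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
y = rng.binomial(1, 0.063, n)  # ~6.3% event rate, as in the study
p = np.clip(0.063 + 0.3 * (y - 0.063) + rng.normal(0, 0.05, n),
            0.001, 0.999)      # noisy but informative risk estimates

def net_benefit(y_true, p_hat, threshold):
    # NB(t) = TP/n - FP/n * t / (1 - t): false positives are penalized
    # by the odds of the threshold probability t.
    treat = p_hat >= threshold
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    m = len(y_true)
    return tp / m - fp / m * threshold / (1 - threshold)

for t in (0.05, 0.10, 0.20):
    print(f"threshold {t:.2f}: net benefit = {net_benefit(y, p, t):.4f}")

A model dominates in DCA terms when its net-benefit curve lies above the alternatives (and above the "treat all" and "treat none" strategies) across the clinically relevant threshold range, which is what the study reports for the machine learning model.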

Comparison of a Machine Learning model with EuroSCORE II in predicting mortality after elective cardiac surgery