Learning Scalable Deep Kernels with Recurrent Structure

Abstract: Many applications in speech, robotics, finance, and biology deal with sequential data, where ordering matters and recurrent structures are common. However, this structure cannot be easily captured by standard kernel functions. To model such structure, we propose expressive closed-form kernel functions for Gaussian processes. The resulting model, GP-LSTM, fully encapsulates the inductive biases of long short-term memory (LSTM) recurrent networks, while retaining the non-parametric probabilistic advantages of Gaussian processes. We learn the properties of the proposed kernels by optimizing the Gaussian process marginal likelihood using a new provably convergent semi-stochastic gradient procedure, and exploit the structure of these kernels for scalable training and prediction. This approach provides a practical representation for Bayesian LSTMs. We demonstrate state-of-the-art performance on several benchmarks, and thoroughly investigate a consequential autonomous driving application, where the predictive uncertainties provided by GP-LSTM are uniquely valuable.
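To make the construction concrete, here is a minimal, hypothetical PyTorch sketch of a deep recurrent kernel and the exact GP marginal likelihood objective: an LSTM maps each input sequence to a feature vector, and an RBF kernel is applied on top of those features. The class name SeqKernelGP, the single-layer LSTM, the use of the last hidden state as the embedding, and all sizes are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of the GP-LSTM idea: LSTM features feeding an RBF kernel,
# trained by the exact GP negative log marginal likelihood. Names and sizes are
# illustrative, not taken from the paper's implementation.
import math
import torch
import torch.nn as nn

class SeqKernelGP(nn.Module):
    def __init__(self, input_size, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.log_lengthscale = nn.Parameter(torch.zeros(1))
        self.log_noise = nn.Parameter(torch.tensor(-2.0))

    def features(self, x):
        # x: (n, seq_len, input_size) -> last hidden state as the sequence embedding
        _, (h, _) = self.lstm(x)
        return h[-1]                                   # (n, hidden_size)

    def kernel(self, z):
        # RBF kernel on the learned LSTM features
        d2 = torch.cdist(z, z).pow(2)
        return torch.exp(-0.5 * d2 / torch.exp(self.log_lengthscale) ** 2)

    def neg_log_marginal_likelihood(self, x, y):
        n = y.shape[0]
        K = self.kernel(self.features(x)) + torch.exp(self.log_noise) * torch.eye(n)
        L = torch.linalg.cholesky(K)
        alpha = torch.cholesky_solve(y.unsqueeze(-1), L)
        # 0.5 * y^T K^{-1} y + 0.5 * log|K| + (n/2) * log(2*pi)
        return (0.5 * y @ alpha.squeeze(-1)
                + torch.log(torch.diagonal(L)).sum()
                + 0.5 * n * math.log(2 * math.pi))
```

Training would minimize this objective jointly over the LSTM weights and the kernel hyperparameters; the structure-exploiting algebra the paper uses for scalability is omitted from this sketch.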
Discussion: We proposed a method for learning kernels with recurrent long short-term memory structure on sequences. Gaussian processes with such kernels, termed the GP-LSTM, have the structure and learning biases of LSTMs, while retaining a probabilistic Bayesian nonparametric representation. The GP-LSTM outperforms a range of alternatives on several sequence-to-reals regression tasks. The GP-LSTM also works on data with low and high signal-to-noise ratios, and can be scaled to very large datasets, all with a straightforward, practical, and generally applicable model specification. Moreover, the semi-stochastic scheme proposed in our paper is provably convergent and efficient in practical settings, in conjunction with structure-exploiting algebra. In short, the GP-LSTM provides a natural mechanism for Bayesian LSTMs, quantifying predictive uncertainty while harmonizing with the standard deep learning toolbox. Predictive uncertainty is of high value in robotics applications, such as autonomous driving, and could also be applied to other areas such as financial modeling and computational biology.
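As a rough illustration of the alternating idea (reusing the hypothetical SeqKernelGP sketch above), one plausible semi-stochastic loop gives the kernel hyperparameters full-batch gradient steps on the exact marginal likelihood, while the recurrent weights take cheap stochastic steps on minibatch versions of it. This is an assumption-laden reading for intuition only, not the authors' exact procedure.

```python
# Schematic semi-stochastic alternating loop (illustrative, not the paper's
# algorithm): deterministic full-data steps for the GP hyperparameters,
# stochastic minibatch steps for the LSTM weights.
import torch

def semi_stochastic_train(model, x, y, outer=20, inner=10, batch=64, lr=1e-2):
    hyper = [model.log_lengthscale, model.log_noise]
    net = list(model.lstm.parameters())
    opt_hyper = torch.optim.Adam(hyper, lr=lr)
    opt_net = torch.optim.Adam(net, lr=lr)
    n = y.shape[0]
    for _ in range(outer):
        # Deterministic step: full-data marginal likelihood for the hyperparameters.
        opt_hyper.zero_grad()
        model.neg_log_marginal_likelihood(x, y).backward()
        opt_hyper.step()
        # Stochastic steps: minibatch objectives for the recurrent weights.
        for _ in range(inner):
            idx = torch.randint(0, n, (batch,))
            opt_net.zero_grad()
            model.neg_log_marginal_likelihood(x[idx], y[idx]).backward()
            opt_net.step()
    return model
```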

Financial Series – Prediction of Stock Market Index Movement by Ten Data Mining Techniques

This article, written by Phichhang Ou and Hengshan Wang, both of the University of Shanghai, presents a study on the application of ten data mining techniques to predicting the movement of the Hong Kong stock exchange index.

The article's central idea is an experimental, comparative analysis of ten data mining techniques (linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), K-nearest neighbor classification, Naïve Bayes based on kernel estimation, the logit model, tree-based classification, neural networks, Bayesian classification with Gaussian processes, support vector machines (SVM), and least squares support vector machines (LS-SVM)), in which the researchers make a series of adjustments to the models for computing the index's movement over the course of the study.
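A minimal sketch of this kind of comparison can be put together with scikit-learn on synthetic up/down labels; it is not the authors' data, features, or tuning. GaussianNB stands in for kernel-density Naïve Bayes, and LS-SVM, which scikit-learn does not provide, is omitted.

```python
# Illustrative comparison of most of the paper's ten classifiers on synthetic
# binary (up/down) data; hyperparameters are defaults, not the authors' setup.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(),
    "NaiveBayes": GaussianNB(),
    "Logit": LogisticRegression(max_iter=1000),
    "Tree": DecisionTreeClassifier(random_state=0),
    "NeuralNet": MLPClassifier(max_iter=2000, random_state=0),
    "GP": GaussianProcessClassifier(random_state=0),
    "SVM": SVC(),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    # "Hit rate" here means the fraction of correctly predicted movements.
    print(f"{name}: hit rate = {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```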

As a result of the study, the authors concluded that most of the techniques achieved a hit rate above 80%, an excellent sign given the huge number of variables to consider and how difficult this domain is to model.

Overall, the article is well written and offers a very interesting perspective on mathematical modeling applied to this kind of domain. The only point against it is that the cross-validation procedure could have been described in more detail, and of course the mathematical content is a barrier for beginners, though nothing that a little personal dedication cannot overcome.
