Interpretability versus Performance: Skepticism and the AI Winter

In this post, Michael Elad, editor-in-chief of the SIAM Journal on Imaging Sciences, offers a series of well-considered reflections on how Deep Learning methods are solving real problems and achieving a high degree of visibility, even with methods that are not particularly elegant from a mathematical perspective.

His main point is that, as far as image processing is concerned, academia has always prized an approach in which interpretability and understanding of the models took precedence over the results achieved.

This becomes clear in the paragraph below:

A series of papers during the early 2000s suggested the successful application of this architecture, leading to state-of-the-art results in practically any assigned task. Key aspects in these contributions included the following: the use of many network layers, which explains the term “deep learning;” a huge amount of data on which to train; massive computations typically run on computer clusters or graphic processing units; and wise optimization algorithms that employ effective initializations and gradual stochastic gradient learning. Unfortunately, all of these great empirical achievements were obtained with hardly any theoretical understanding of the underlying paradigm. Moreover, the optimization employed in the learning process is highly non-convex and intractable from a theoretical viewpoint.

At the end, he offers a view on pragmatism and the academic agenda:

Should we be happy about this trend? Well, if we are in the business of solving practical problems such as noise removal, the answer must be positive. Right? Therefore, a company seeking such a solution should be satisfied. But what about us scientists? What is the true objective behind the vast effort that we invested in the image denoising problem? Yes, we do aim for effective noise-removal algorithms, but this constitutes a small fraction of our motivation, as we have a much wider and deeper agenda. Researchers in our field aim to understand the data on which we operate. This is done by modeling information in order to decipher its true dimensionality and manifested phenomena. Such models serve denoising and other problems in image processing, but far more than that, they allow identifying new ways to extract knowledge from the data and enable new horizons.

This reminds me of my time at RCB Investimentos, when I worked with the great Renato Toledo in the NPL market. He taught me that good models have a high degree of interpretability and simplicity, and that this factor should be the barometer for decision-making, since a model whose uncertainty (or error) is known is better than a model in which nobody knows what is going on. (Personal note: those who know me know I have a saying about this: if you don't understand the dynamics of a model when it works, you will never know what went wrong when it fails.)

Nevertheless, it is undeniable that Deep Learning networks are, in my view, meeting a pent-up demand: problems that already existed but that computational methods could not solve easily, such as facial recognition, image classification, translation, and structured problems such as fraud (Fast.AI is doing a great job of clarifying this).

Even granting that DL researchers have had virtually infinite hardware at modest prices, the brutal fact is that for roughly 30 years this field of research swallowed a very bitter pill of skepticism coming from academia itself: the method was placed under such heavy suspicion that it nearly went extinct, and some journals implicitly refused to accept DL papers; meanwhile, mathematicians were winning prizes and enjoying a high level of visibility because of the accuracy of their methods, rather than because of some presumed idea that the world loved the interpretability of those methods.

Two big questions are now on the table: 1) Can the mathematicians and the communities shocked by this phenomenon endure what the Neural Networks community endured for more than 30 years? And 2) in the event of a Math Winter, can the mathematical community withstand a potential marginalization of its research?

We will have to wait and see.

Modularized Morphing of Neural Networks

Who said that morphological changes cannot happen in neural network architectures/topologies?

Modularized Morphing of Neural Networks – Tao Wei, Changhu Wang, Chang Wen Chen

Abstract: In this work we study the problem of network morphism, an effective learning scheme to morph a well-trained neural network to a new one with the network function completely preserved. Different from existing work where basic morphing types on the layer level were addressed, we target at the central problem of network morphism at a higher level, i.e., how a convolutional layer can be morphed into an arbitrary module of a neural network. To simplify the representation of a network, we abstract a module as a graph with blobs as vertices and convolutional layers as edges, based on which the morphing process is able to be formulated as a graph transformation problem. Two atomic morphing operations are introduced to compose the graphs, based on which modules are classified into two families, i.e., simple morphable modules and complex modules. We present practical morphing solutions for both of these two families, and prove that any reasonable module can be morphed from a single convolutional layer. Extensive experiments have been conducted based on the state-of-the-art ResNet on benchmark datasets, and the effectiveness of the proposed solution has been verified.

Conclusions: This paper presented a systematic study on the problem of network morphism at a higher level, and tried to answer the central question of such learning scheme, i.e., whether and how a convolutional layer can be morphed into an arbitrary module. To facilitate the study, we abstracted a modular network as a graph, and formulated the process of network morphism as a graph transformation process. Based on this formulation, both simple morphable modules and complex modules have been defined and corresponding morphing algorithms have been proposed. We have shown that a convolutional layer can be morphed into any module of a network. We have also carried out experiments to illustrate how to achieve a better performing model based on the state-of-the-art ResNet with minimal extra computational cost on benchmark datasets.

Akid: A Neural Network Library for Research and Production

Finally, people have started thinking about closing the gap between science/academia and industry.

Akid: A Library for Neural Network Research and Production from a Dataism Approach – Shuai Li
Abstract: Neural networks are a revolutionary but immature technique that is fast evolving and heavily relies on data. To benefit from the newest development and newly available data, we want the gap between research and production as small as possible. On the other hand, differing from traditional machine learning models, neural network is not just yet another statistic model, but a model for the natural processing engine — the brain. In this work, we describe a neural network library named akid. It provides higher level of abstraction for entities (abstracted as blocks) in nature upon the abstraction done on signals (abstracted as tensors) by Tensorflow, characterizing the dataism observation that all entities in nature processes input and emit out in some ways. It includes a full stack of software that provides abstraction to let researchers focus on research instead of implementation, while at the same time the developed program can also be put into production seamlessly in a distributed environment, and be production ready. At the top application stack, it provides out-of-box tools for neural network applications. Lower down, akid provides a programming paradigm that lets user easily build customized models. The distributed computing stack handles the concurrency and communication, thus letting models be trained or deployed to a single GPU, multiple GPUs, or a distributed environment without affecting how a model is specified in the programming paradigm stack. Lastly, the distributed deployment stack handles how the distributed computing is deployed, thus decoupling the research prototype environment with the actual production environment, and is able to dynamically allocate computing resources, so development (Devs) and operations (Ops) could be separated.

AlphaGo: The Biggest Advance in Neural Networks and Artificial Intelligence in 2016

Without a shadow of a doubt, the biggest advance of 2016 in Artificial Intelligence/Neural Networks was AlphaGo's victory over Lee Sedol in the game of Go.

Unlike in the days of Deep Blue, which defeated Garry Kasparov using a version of exhaustive search with what was, at the time, very high computational power (i.e., at each move Deep Blue computed the possible continuations and, using an evaluation function on each outcome, took the result as a heuristic for the next move), AlphaGo could not win by brute-force enumeration alone, since Go's branching factor makes that kind of search infeasible.
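For contrast, here is a minimal, purely illustrative sketch of that classical scheme: depth-limited minimax, where a static evaluation function scores positions and guides the choice of move. All function names here (evaluate, legal_moves, apply_move) are hypothetical placeholders, not Deep Blue's actual code.

```python
# Depth-limited minimax with a static evaluation function as heuristic.
def minimax(state, depth, maximizing, evaluate, legal_moves, apply_move):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None
    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in moves:
        score, _ = minimax(apply_move(state, move), depth - 1,
                           not maximizing, evaluate, legal_moves, apply_move)
        if (maximizing and score > best_score) or \
           (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

# Toy usage: a made-up "game" where the state is a number, each move
# either adds 1 or doubles it, and the evaluation is the number itself.
score, move = minimax(
    state=3, depth=4, maximizing=True,
    evaluate=lambda s: s,
    legal_moves=lambda s: ["+1", "*2"],
    apply_move=lambda s, m: s + 1 if m == "+1" else s * 2,
)
```

This kind of enumeration works for chess (branching factor around 35) but collapses for Go, whose branching factor of roughly 250 makes the game tree astronomically larger, which is exactly the kind of gap the DCNN approach described below addresses.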

A short video about the match:

Better Computer Go Player with Neural Network and Long-term Prediction

Yuandong Tian, Yan Zhu

Competing with top human players in the ancient game of Go has been a long-term goal of artificial intelligence. Go’s high branching factor makes traditional search techniques ineffective, even on leading-edge hardware, and Go’s evaluation function could change drastically with one stone change. Recent works [Maddison et al. (2015); Clark & Storkey (2015)] show that search is not strictly necessary for machine Go players. A pure pattern-matching approach, based on a Deep Convolutional Neural Network (DCNN) that predicts the next move, can perform as well as Monte Carlo Tree Search (MCTS)-based open source Go engines such as Pachi [Baudis & Gailly (2012)] if its search budget is limited. We extend this idea in our bot named darkforest, which relies on a DCNN designed for long-term predictions. Darkforest substantially improves the win rate for pattern-matching approaches against MCTS-based approaches, even with looser search budgets. Against human players, the newest versions, darkfores2, achieve a stable 3d level on KGS Go Server as a ranked bot, a substantial improvement upon the estimated 4k-5k ranks for DCNN reported in Clark & Storkey (2015) based on games against other machine players. Adding MCTS to darkfores2 creates a much stronger player named darkfmcts3: with 5000 rollouts, it beats Pachi with 10k rollouts in all 250 games; with 75k rollouts it achieves a stable 5d level in KGS server, on par with state-of-the-art Go AIs (e.g., Zen, DolBaram, CrazyStone) except for AlphaGo [Silver et al. (2016)]; with 110k rollouts, it won the 3rd place in January KGS Go Tournament.

Conclusion

In this paper, we have substantially improved the performance of DCNN-based Go AI, extensively evaluated it against both open source engines and strong amateur human players, and shown its potentials if combined with Monte-Carlo Tree Search (MCTS). Ideally, we want to construct a system that combines both pattern matching and search, and can be trained jointly in an online fashion. Pattern matching with DCNN is good at global board reading, but might fail to capture special local situations. On the other hand, search is excellent in modeling arbitrary situations, by building a local non-parametric model for the current state, only when the computation cost is affordable. One paradigm is to update DCNN weights (i.e., Policy Gradient [Sutton et al. (1999)]) after MCTS completes and chooses a different best move than DCNN’s proposal. To increase the signal bandwidth, we could also update weights using all the board situations along the trajectory of the best move. Alternatively, we could update the weights when MCTS is running. Actor-Critics algorithms [Konda & Tsitsiklis (1999)] can also be used to train two models simultaneously, one to predict the next move (actor) and the other to evaluate the current board situation (critic). Finally, local tactics training (e.g., Life/Death practice) focuses on local board situation with fewer variations, which DCNN approaches should benefit from like human players.

Stochastic Depth Neural Networks

A great article on how practice remains a great teacher when it comes to handling metaheuristic methods.

Why is that a big deal? The biggest impediment in applying deep learning (or for that matter any S/E process) in product development is turnaround time. If I spend 1 week training my model and _then_ find it is a pile of shit, because I did not initialize something well or the architecture was missing something, that’s not good. For this reason, everyone I know wants to get the best GPUs or work on the biggest clusters — not just it lets them build more expressive networks but simply they’re super fast. So, any technique that improves experiment turnaround time is welcome!

The idea is ridiculously simple (perhaps why it is effective?): randomly skip layers while training. As a result you have a network that has a really small expected depth, while the maximum depth can be in the order of 1000s. In effect, like dropout training, this creates an ensemble model from the 2^L possible networks for an L-layer deep network.
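A minimal sketch of the mechanism in plain Python; the scalar "blocks" below are hypothetical stand-ins for residual layers, and the survival probabilities follow the linear decay rule proposed in the stochastic depth paper:

```python
import random

def stochastic_depth_forward(x, blocks, survival_probs, training=True):
    # Each block f computes a residual; during training it is skipped
    # (identity) with probability 1 - p, and at test time its output is
    # scaled by p, mirroring dropout's test-time rescaling.
    for f, p in zip(blocks, survival_probs):
        if training:
            if random.random() < p:   # keep this block
                x = x + f(x)
            # else: skip it entirely, leaving the identity connection
        else:
            x = x + p * f(x)          # expected contribution at test time
    return x

# Toy usage with hypothetical scalar "blocks" standing in for conv layers.
blocks = [lambda v: 0.1 * v for _ in range(10)]
L = len(blocks)
# Linear decay: deeper blocks are dropped more often, down to p = 0.5.
survival = [1.0 - (l / L) * 0.5 for l in range(1, L + 1)]
y = stochastic_depth_forward(1.0, blocks, survival, training=True)
```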

Neural Networks and Deep Neural Networks

Since the subject is very hot, there is even a free online book (with code and everything):

Neural Networks and Deep Learning is a free online book. The book will teach you about:

Neural networks, a beautiful biologically-inspired programming paradigm which enables a computer to learn from observational data

Deep learning, a powerful set of techniques for learning in neural networks

Neural networks and deep learning currently provide the best solutions to many problems in image recognition, speech recognition, and natural language processing. This book will teach you the core concepts behind neural networks and deep learning.

The book is currently an incomplete beta draft. More chapters will be added over the coming months. For now, you can:

Read Chapter 1, which explains how neural networks can learn to recognize handwriting

Read Chapter 2, which explains backpropagation, the most important algorithm used to learn in neural networks.

Read Chapter 3, which explains many techniques which can be used to improve the performance of backpropagation.

Read Chapter 4, which explains why neural networks can compute any function.

Learn more about the approach taken in this book

Deep Learning Applications and Challenges in Big Data Analytics

An interesting thing about this paper is that it is one of the few with a strategy for Deep Learning based not on algorithms but on semantic indexing.

Abstract

Big Data Analytics and Deep Learning are two high-focus of data science. Big Data has become important as many organizations both public and private have been collecting massive amounts of domain-specific information, which can contain useful information about problems such as national intelligence, cyber security, fraud detection, marketing, and medical informatics. Companies such as Google and Microsoft are analyzing large volumes of data for business analysis and decisions, impacting existing and future technology. Deep Learning algorithms extract high-level, complex abstractions as data representations through a hierarchical learning process. Complex abstractions are learnt at a given level based on relatively simpler abstractions formulated in the preceding level in the hierarchy. A key benefit of Deep Learning is the analysis and learning of massive amounts of unsupervised data, making it a valuable tool for Big Data Analytics where raw data is largely unlabeled and un-categorized. In the present study, we explore how Deep Learning can be utilized for addressing some important problems in Big Data Analytics, including extracting complex patterns from massive volumes of data, semantic indexing, data tagging, fast information retrieval, and simplifying discriminative tasks. We also investigate some aspects of Deep Learning research that need further exploration to incorporate specific challenges introduced by Big Data Analytics, including streaming data, high-dimensional data, scalability of models, and distributed computing. We conclude by presenting insights into relevant future works by posing some questions, including defining data sampling criteria, domain adaptation modeling, defining criteria for obtaining useful data abstractions, improving semantic indexing, semi-supervised learning, and active learning.

s40537-014-0007-7

Deep Learning Basics

In this interview with Arno Candel of H2O for KDnuggets, he summarizes what Deep Learning is:

[…]Deep Learning methods use a composition of multiple non-linear transformations to model high-level abstractions in data. Multi-layer feed-forward artificial neural networks are some of the oldest and yet most useful such techniques. We are now reaping the benefits of over 60 years of evolution in Deep Learning that began in the late 1950s when the term Machine Learning was coined. Large parts of the growing success of Deep Learning in the past decade can be attributed to Moore’s law and the exponential speedup of computers, but there were also many algorithmic breakthroughs that enabled robust training of deep learners.

Compared to more interpretable Machine Learning techniques such as tree-based methods, conventional Deep Learning (using stochastic gradient descent and back-propagation) is a rather “brute-force” method that optimizes lots of coefficients (it is a parametric method) starting from random noise by continuously looking at examples from the training data. It follows the basic idea of “(good) practice makes perfect” (similar to a real brain) without any strong guarantees on the quality of the model. […]
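Read literally, "a composition of multiple non-linear transformations" is just nested function application. A minimal NumPy sketch (shapes and weights are made up):

```python
import numpy as np

def layer(x, W, b):
    return np.tanh(x @ W + b)   # affine map followed by a non-linearity

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))                      # one input example
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

h = layer(x, W1, b1)     # first non-linear transformation
out = layer(h, W2, b2)   # composed with a second one: out = f2(f1(x))
```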

In this passage he talks about some applications of Deep Learning:

[…]Deep Learning is really effective at learning non-linear derived features from the raw input features, unlike standard Machine Learning methods such as linear or tree-based methods. For example, if age and income are the two features used to predict spending, then a linear model would greatly benefit from manually splitting age and income ranges into distinct groups; while a tree-based model would learn to automatically dissect the two-dimensional space.

A Deep Learning model builds hierarchies of (hidden) derived non-linear features that get composed to approximate arbitrary functions such as sqrt((age-40)^2+0.3*log(income+1)-4) with much less effort than with other methods. Traditionally, data scientists perform many of these transformations explicitly based on domain knowledge and experience, but Deep Learning has been shown to be extremely effective at coming up with those transformations, often outperforming standard Machine Learning models by a substantial margin.

Deep Learning is also very good at predicting high-cardinality class memberships, such as in image or voice recognition problems, or in predicting the best item to recommend to a user. Another strength of Deep Learning is that it can also be used for unsupervised learning where it just learns the intrinsic structure of the data without making predictions (remember the Google cat?). This is useful in cases where there are no training labels, or for various other use cases such as anomaly detection. […]
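As a toy illustration of that point (hypothetical data generated from a formula of the same shape; scikit-learn assumed), a linear model on the raw features fails where a small MLP approximates the derived feature:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
age = rng.uniform(18, 80, 5000)
income = rng.uniform(0, 200_000, 5000)
# Target built from a non-linear derived feature (clipped to stay real).
y = np.sqrt(np.clip((age - 40) ** 2 + 0.3 * np.log(income + 1) - 4, 0, None))
X = np.column_stack([age, income])

linear = LinearRegression().fit(X, y)
mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32),
                                 max_iter=2000, random_state=0)).fit(X, y)

print("linear R^2:", round(linear.score(X, y), 3))  # poor fit on raw inputs
print("MLP R^2:   ", round(mlp.score(X, y), 3))     # learns the transformation
```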

An Overview of Deep Neural Networks

Random Ponderings has an overview (which is almost a complete introduction) of Deep Neural Networks.

The topic is not exactly new, but it has been taking up a lot of space in the current literature because the technique has been solving important classification problems, mainly those involving images.

These networks are essentially networks in which the inputs are encoded so that not only each attribute is modeled but attribute-value subsets are fed into the network; then somewhere between 20 and 60 hidden (intermediate) layers are introduced to train the network and adjust the weight computations.

Below are some heuristics for training Deep Neural Networks:

  • Get the data: Make sure that you have a high-quality dataset of input-output examples that is large, representative, and has relatively clean labels. Learning is completely impossible without such a dataset.
  • Preprocessing: it is essential to center the data so that its mean is zero and so that the variance of each of its dimensions is one. Sometimes, when the input dimension varies by orders of magnitude, it is better to take the log(1 + x) of that dimension. Basically, it’s important to find a faithful encoding of the input with zero mean and sensibly bounded dimensions. Doing so makes learning work much better. This is the case because the weights are updated by the formula: change in w_ij ∝ x_i · dL/dy_j (w denotes the weights from layer x to layer y, and L is the loss function). If the average value of the x’s is large (say, 100), then the weight updates will be very large and correlated, which makes learning bad and slow. Keeping things zero-mean and with small variance simply makes everything work much better. (A minimal sketch of this, together with gradient-norm clipping, appears after this list.)
  • Minibatches: Use minibatches. Modern computers cannot be efficient if you process one training case at a time. It is vastly more efficient to train the network on minibatches of 128 examples, because doing so will result in massively greater throughput. It would actually be nice to use minibatches of size 1, and they would probably result in improved performance and lower overfitting; but the benefit of doing so is outweighed by the massive computational gains provided by minibatches. But don’t use very large minibatches because they tend to work less well and overfit more. So the practical recommendation is: use the smaller minibatch that runs efficiently on your machine.
  • Gradient normalization: Divide the gradient by minibatch size. This is a good idea because of the following pleasant property: you won’t need to change the learning rate (not too much, anyway), if you double the minibatch size (or halve it).
  • Learning rate schedule: Start with a normal-sized learning rate (LR) and reduce it towards the end.
    • A typical value of the LR is 0.1. Amazingly, 0.1 is a good value of the learning rate for a large number of neural networks problems. Learning rates frequently tend to be smaller but rarely much larger.
    • Use a validation set — a subset of the training set on which we don’t train — to decide when to lower the learning rate and when to stop training (e.g., when error on the validation set starts to increase).
    • A practical suggestion for a learning rate schedule: if you see that you stopped making progress on the validation set, divide the LR by 2 (or by 5), and keep going. Eventually, the LR will become very small, at which point you will stop your training. Doing so helps ensure that you won’t be (over-)fitting the training data at the detriment of validation performance, which happens easily and often. Also, lowering the LR is important, and the above recipe provides a useful approach to controlling via the validation set.
  • But most importantly, worry about the Learning Rate. One useful idea used by some researchers (e.g., Alex Krizhevsky) is to monitor the ratio between the update norm and the weight norm. This ratio should be at around 10^-3. If it is much smaller then learning will probably be too slow, and if it is much larger then learning will be unstable and will probably fail.
  • Weight initialization. Worry about the random initialization of the weights at the start of learning.
    • If you are lazy, it is usually enough to do something like 0.02 * randn(num_params). A value at this scale tends to work surprisingly well over many different problems. Of course, smaller (or larger) values are also worth trying.
    • If it doesn’t work well (say your neural network architecture is unusual and/or very deep), then you should initialize each weight matrix with init_scale / sqrt(layer_width) * randn, where init_scale should be set to 0.1 or 1, or something like that.
    • Random initialization is super important for deep and recurrent nets. If you don’t get it right, then it’ll look like the network doesn’t learn anything at all. But we know that neural networks learn once the conditions are set.
    • Fun story: researchers believed, for many years, that SGD cannot train deep neural networks from random initializations. Every time they would try it, it wouldn’t work. Embarrassingly, they did not succeed because they used the “small random weights” for the initialization, which works great for shallow nets but simply doesn’t work for deep nets at all. When the nets are deep, the many weight matrices all multiply each other, so the effect of a suboptimal scale is amplified.
    • But if your net is shallow, you can afford to be less careful with the random initialization, since SGD will just find a way to fix it.

    You’re now informed. Worry and care about your initialization. Try many different kinds of initialization. This effort will pay off. If the net doesn’t work at all (i.e., never “gets off the ground”), keep applying pressure to the random initialization. It’s the right thing to do.

  • If you are training RNNs or LSTMs, use a hard constraint over the norm of the gradient (remember that the gradient has been divided by batch size). Something like 15 or 5 works well in practice in my own experiments. Take your gradient, divide it by the size of the minibatch, and check if its norm exceeds 15 (or 5). If it does, then shrink it until it is 15 (or 5). This one little trick plays a huge difference in the training of RNNs and LSTMs, where otherwise the exploding gradient can cause learning to fail and force you to use a puny learning rate like 1e-6, which is too small to be useful. (This clipping rule is illustrated in the sketch after this list.)
  • Numerical gradient checking: If you are not using Theano or Torch, you’ll be probably implementing your own gradients. It is easy to make a mistake when we implement a gradient, so it is absolutely critical to use numerical gradient checking. Doing so will give you a complete peace of mind and confidence in your code. You will know that you can invest effort in tuning the hyperparameters (such as the learning rate and the initialization) and be sure that your efforts are channeled in the right direction.
  • If you are using LSTMs and you want to train them on problems with very long range dependencies, you should initialize the biases of the forget gates of the LSTMs to large values. By default, the forget gates are the sigmoids of their total input, and when the weights are small, the forget gate is set to 0.5, which is adequate for some but not all problems. This is the one non-obvious caveat about the initialization of the LSTM.
  • Data augmentation: be creative, and find ways to algorithmically increase the number of training cases that are at your disposal. If you have images, then you should translate and rotate them; if you have speech, you should combine clean speech with all types of random noise; etc. Data augmentation is an art (unless you’re dealing with images). Use common sense.
  • Dropout. Dropout provides an easy way to improve performance. It’s trivial to implement and there’s little reason not to do it. Remember to tune the dropout probability, and do not forget to turn off Dropout and to multiply the weights by p (namely, 1 − the dropout probability) at test time. Also, be sure to train the network for longer. Unlike normal training, where the validation error often starts increasing after prolonged training, dropout nets keep getting better and better the longer you train them. So be patient.
  • Ensembling. Train 10 neural networks and average their predictions. It’s a fairly trivial technique that results in easy, sizeable performance improvements. One may be mystified as to why averaging helps so much, but there is a simple reason for the effectiveness of averaging. Suppose that two classifiers have an accuracy of 70%. Then, when they agree they are right. But when they disagree, one of them is often right, so now the average prediction will place much more weight on the correct answer. The effect will be especially strong whenever the network is confident when it’s right and unconfident when it’s wrong.
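A few of the heuristics above (zero-mean/unit-variance preprocessing, gradient normalization by minibatch size, and the hard constraint on the gradient norm) in one minimal NumPy sketch; all data and hyperparameters here are made-up assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(512, 8))    # raw features with large offsets
y = rng.normal(size=(512, 1))

# Preprocessing: center each dimension and scale it to unit variance.
X = (X - X.mean(axis=0)) / X.std(axis=0)

W = 0.02 * rng.standard_normal((8, 1))    # the "0.02 * randn" initialization
lr, clip_norm, batch = 0.1, 5.0, 128      # minibatches of 128, as suggested

for step in range(100):
    idx = rng.integers(0, len(X), size=batch)
    xb, yb = X[idx], y[idx]
    grad = xb.T @ (xb @ W - yb) / batch   # gradient divided by minibatch size
    norm = np.linalg.norm(grad)
    if norm > clip_norm:                  # hard constraint on gradient norm
        grad *= clip_norm / norm
    W -= lr * grad                        # plain SGD update
```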

Disjunctive Neural Networks

The paper is seminal (which means it should be reviewed with a bit more caution), but it represents a good advance in the use of ANNs, given that Random Forests and Support Vector Machines have been showing far better results, academically speaking.

Below is the paper's abstract:

Artificial neural networks are powerful pattern classifiers; however, they have been surpassed in accuracy by methods such as support vector machines and random forests that are also easier to use and faster to train. Backpropagation, which is used to train artificial neural networks, suffers from the herd effect problem which leads to long training times and limits classification accuracy. We use the disjunctive normal form and approximate the boolean conjunction operations with products to construct a novel network architecture. The proposed model can be trained by minimizing an error function and it allows an effective and intuitive initialization which solves the herd-effect problem associated with backpropagation. This leads to state-of-the-art classification accuracy and fast training times. In addition, our model can be jointly optimized with convolutional features in a unified structure leading to state-of-the-art results on computer vision problems with fast convergence rates. A GPU implementation of LDNN with optional convolutional features is also available.

A Walk-Through Approach to Deep Learning

Artificial Neural Networks (ANNs) are machine learning computational models that mimic some aspects of human cognitive activity (e.g., memory (storing and timely retrieving data), perception (building relations of significance from past experience), and reasoning (the capacity to perform logical operations on abstract information)); they are a great technique for classification and regression tasks.

Deep Learning, in turn, is the set of multiple machine learning tasks dealing with different types of abstraction, whose most striking characteristic is solving problems through structures that represent the data in different ways.

In this video, Sungwook Yoon draws a competent comparison between these two structures.

Michael Jordan (not the basketball player) talks about some topics in Machine Learning and Big Data

Below is the most sensible commentary around on several subjects related to data analysis, Data Mining, and, above all, Big Data.

UPDATE: Michael Jordan himself gave an interview saying he was misinterpreted on some points. Still, it is worth noting that the rebuttal says nothing about much of what is important in the original remarks; so draw your own conclusions.

For those who don't know, Michael Jordan (IEEE) is one of the greatest authorities on machine learning in the academic world.

In this interview (which this space withheld out of pure carelessness), he presents extremely sober and lucid arguments about Deep Learning (which will get its own post here soon) and, above all, about Big Data.

On the Big Data portion in particular, these comments invite reflection and, above all, raise points about this phenomenon that deserve discussion.

Obviously, companies of the caliber of Google, Amazon, and Yahoo, and projects such as the Genome project, can benefit from large volumes of data. The main problem is that all this hipsterization around Big Data looks far more marketing-driven than aimed at solving pertinent business questions.

Here are some important excerpts:

On Deep Learning, simplifications, and the like…

IEEE Spectrum: I infer from your writing that you believe there’s a lot of misinformation out there about deep learning, big data, computer vision, and the like.

Michael Jordan: Well, on all academic topics there is a lot of misinformation. The media is trying to do its best to find topics that people are going to read about. Sometimes those go beyond where the achievements actually are. Specifically on the topic of deep learning, it’s largely a rebranding of neural networks, which go back to the 1980s. They actually go back to the 1960s; it seems like every 20 years there is a new wave that involves them. In the current wave, the main success story is the convolutional neural network, but that idea was already present in the previous wave. And one of the problems with both the previous wave, that has unfortunately persisted in the current wave, is that people continue to infer that something involving neuroscience is behind it, and that deep learning is taking advantage of an understanding of how the brain processes information, learns, makes decisions, or copes with large amounts of data. And that is just patently false.

Spectrum: It’s always been my impression that when people in computer science describe how the brain works, they are making horribly reductionist statements that you would never hear from neuroscientists. You called these “cartoon models” of the brain.

Michael Jordan: I wouldn’t want to put labels on people and say that all computer scientists work one way, or all neuroscientists work another way. But it’s true that with neuroscience, it’s going to require decades or even hundreds of years to understand the deep principles. There is progress at the very lowest levels of neuroscience. But for issues of higher cognition—how we perceive, how we remember, how we act—we have no idea how neurons are storing information, how they are computing, what the rules are, what the algorithms are, what the representations are, and the like. So we are not yet in an era in which we can be using an understanding of the brain to guide us in the construction of intelligent systems.

On Big Data

Spectrum: If we could turn now to the subject of big data, a theme that runs through your remarks is that there is a certain fool’s gold element to our current obsession with it. For example, you’ve predicted that society is about to experience an epidemic of false positives coming out of big-data projects.

Michael Jordan: When you have large amounts of data, your appetite for hypotheses tends to get even larger. And if it’s growing faster than the statistical strength of the data, then many of your inferences are likely to be false. They are likely to be white noise.

Spectrum: How so?

Michael Jordan: In a classical database, you have maybe a few thousand people in them. You can think of those as the rows of the database. And the columns would be the features of those people: their age, height, weight, income, et cetera.

Now, the number of combinations of these columns grows exponentially with the number of columns. So if you have many, many columns—and we do in modern databases—you’ll get up into millions and millions of attributes for each person.

Now, if I start allowing myself to look at all of the combinations of these features—if you live in Beijing, and you ride bike to work, and you work in a certain job, and are a certain age—what’s the probability you will have a certain disease or you will like my advertisement? Now I’m getting combinations of millions of attributes, and the number of such combinations is exponential; it gets to be the size of the number of atoms in the universe.

Those are the hypotheses that I’m willing to consider. And for any particular database, I will find some combination of columns that will predict perfectly any outcome, just by chance alone. If I just look at all the people who have a heart attack and compare them to all the people that don’t have a heart attack, and I’m looking for combinations of the columns that predict heart attacks, I will find all kinds of spurious combinations of columns, because there are huge numbers of them.

So it’s like having billions of monkeys typing. One of them will write Shakespeare.
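To make the combinatorics concrete (back-of-the-envelope numbers, not from the interview):

```python
# With n binary attributes there are 2**n attribute combinations that
# could serve as hypotheses; the numbers here are illustrative only.
n_attributes = 300
hypotheses = 2 ** n_attributes
atoms_in_universe = 10 ** 80          # common order-of-magnitude estimate

print(f"{hypotheses:.2e} candidate combinations")  # ~2.04e+90
print(hypotheses > atoms_in_universe)              # True
```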

Spectrum: Do you think this aspect of big data is currently underappreciated?

Michael Jordan: Definitely.

Spectrum: What are some of the things that people are promising for big data that you don’t think they will be able to deliver?

Michael Jordan: I think data analysis can deliver inferences at certain levels of quality. But we have to be clear about what levels of quality. We have to have error bars around all our predictions. That is something that’s missing in much of the current machine learning literature.

Spectrum: What will happen if people working with data don’t heed your advice?

Michael Jordan: I like to use the analogy of building bridges. If I have no principles, and I build thousands of bridges without any actual science, lots of them will fall down, and great disasters will occur.

Similarly here, if people use data and inferences they can make with the data without any concern about error bars, about heterogeneity, about noisy data, about the sampling pattern, about all the kinds of things that you have to be serious about if you’re an engineer and a statistician—then you will make lots of predictions, and there’s a good chance that you will occasionally solve some real interesting problems. But you will occasionally have some disastrously bad decisions. And you won’t know the difference a priori. You will just produce these outputs and hope for the best.

And so that’s where we are currently. A lot of people are building things hoping that they work, and sometimes they will. And in some sense, there’s nothing wrong with that; it’s exploratory. But society as a whole can’t tolerate that; we can’t just hope that these things work. Eventually, we have to give real guarantees. Civil engineers eventually learned to build bridges that were guaranteed to stand up. So with big data, it will take decades, I suspect, to get a real engineering approach, so that you can say with some assurance that you are giving out reasonable answers and are quantifying the likelihood of errors.

Spectrum: Do we currently have the tools to provide those error bars?

Michael Jordan: We are just getting this engineering science assembled. We have many ideas that come from hundreds of years of statistics and computer science. And we’re working on putting them together, making them scalable. A lot of the ideas for controlling what are called familywise errors, where I have many hypotheses and want to know my error rate, have emerged over the last 30 years. But many of them haven’t been studied computationally. It’s hard mathematics and engineering to work all this out, and it will take time.

It’s not a year or two. It will take decades to get right. We are still learning how to do big data well.

Spectrum: When you read about big data and health care, every third story seems to be about all the amazing clinical insights we’ll get almost automatically, merely by collecting data from everyone, especially in the cloud.

Michael Jordan: You can’t be completely a skeptic or completely an optimist about this. It is somewhere in the middle. But if you list all the hypotheses that come out of some analysis of data, some fraction of them will be useful. You just won’t know which fraction. So if you just grab a few of them—say, if you eat oat bran you won’t have stomach cancer or something, because the data seem to suggest that—there’s some chance you will get lucky. The data will provide some support.

But unless you’re actually doing the full-scale engineering statistical analysis to provide some error bars and quantify the errors, it’s gambling. It’s better than just gambling without data. That’s pure roulette. This is kind of partial roulette.

Spectrum: What adverse consequences might await the big-data field if we remain on the trajectory you’re describing?

Michael Jordan: The main one will be a “big-data winter.” After a bubble, when people invested and a lot of companies overpromised without providing serious analysis, it will bust. And soon, in a two- to five-year span, people will say, “The whole big-data thing came and went. It died. It was wrong.” I am predicting that. It’s what happens in these cycles when there is too much hype, i.e., assertions not based on an understanding of what the real problems are or on an understanding that solving the problems will take decades, that we will make steady progress but that we haven’t had a major leap in technical progress. And then there will be a period during which it will be very hard to get resources to do data analysis. The field will continue to go forward, because it’s real, and it’s needed. But the backlash will hurt a large number of important projects.

Reproducible Research with R and RStudio – A Book on Reproducible Research

Still on the subject of reproducing research, a book on the topic, Reproducible Research with R and RStudio by Christopher Gandrud, is about to be released.

In the book's excerpt, the author offers 5 practical tips for creating/reproducing research:

  1. Document everything!
  2. Everything is a (text) file.
  3. All files should be human readable.
  4. Explicitly tie your files together.
  5. Have a plan to organize, store, and make your files available.

Replication in Academic Data Mining Research

Reading this post by John Taylor about the replication of economic research published even in high-impact journals reminded me of a fairly common practice in academic journals in Industrial Engineering and Data Mining: the irreproducibility of published papers.

This irreproducibility lies in how the results are obtained, especially with techniques such as Clustering, Association Rules, and, above all, Neural Networks.

An academic/technical/experimental work that cannot be reproduced is a priori 1) methodologically weak and 2) terribly reviewed. Works with these characteristics support knowledge about as well as so-called anecdotal evidence does.

After reading more than 150 papers in 2012 (and on the way to 300 in 2013), the structure never changes:

  • Introduction;
  • Literature Review;
  • Application of the Technique;
  • Results; and
  • A Discussion claiming a 90% gain with neural networks.

Here is a rather interesting checklist for dissecting an academic paper with a terrible DOE (design of experiments) and weak methodological grounding:

Clustering papers

  • What was the sample size?
  • What is the minimum sample size within the estimated population?
  • Were statistical tests run on the population, such as a Z-test or ANOVA?
  • What is the p-value?
  • What technique was used to determine the separation of the clusters? (see the sketch after this list)
  • Which parameters were used for the clustering?
  • Why was algorithm Z chosen?
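For illustration, one minimal way to answer the parameter and cluster-separation questions above (hypothetical data; scikit-learn assumed):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Two hypothetical, well-separated groups of points.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("parameters:", km.get_params())                  # "which parameters?"
print("silhouette:", silhouette_score(X, km.labels_))  # "how separated?"
```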

Association Rules papers

  • What was the minimum support? (a tiny worked example appears after this list)
  • What is the sample size, and how statistically representative is it of the population?
  • How much does the SUPPORT represent the POPULATION within your study?
  • How was the pruning down to actionable rules performed?
  • Is the sample generalizable? Why wasn't the experiment run on the ENTIRE population?
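To make "minimum support" concrete, a tiny self-contained computation over hypothetical transactions:

```python
# support(itemset) = fraction of transactions that contain the itemset.
transactions = [
    {"bread", "milk"}, {"bread", "beer"}, {"milk", "beer"},
    {"bread", "milk", "beer"}, {"milk"},
]

def support(itemset, transactions):
    return sum(itemset <= t for t in transactions) / len(transactions)

print(support({"bread", "milk"}, transactions))  # 0.4: report it, don't hide it
```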

Neural Networks papers

  • What is the network architecture?
  • Why was the tangent activation function used rather than the hyperbolic one (or vice versa)?
  • Is the activation function suitable for the data under study? How were the data pre-processed and discretized?
  • Why was that number of hidden layers chosen?
  • Is there a learning rate? What was it, and why was that rate chosen?
  • Is there decay? Why?
  • What about momentum? Was it used? With which parameters?
  • What cost structure is tied to the results? How many Type I and Type II errors did the network make? (see the confusion-matrix sketch after this list)
  • And the number of epochs? How was it determined, and at what point did the network stop converging? Do you think it reached a global or a local minimum of the error? How do you explain that in the paper's results?
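For the Type I/Type II question referenced in the list, a confusion matrix over hypothetical labels makes the counts explicit:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])   # hypothetical ground truth
y_pred = np.array([0, 1, 1, 0, 1, 0, 1, 1])   # hypothetical network output
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("Type I errors (false positives):", fp)   # 2
print("Type II errors (false negatives):", fn)  # 1
```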

At first glance this may look like academic deconstructionism dressed up as critical examination, but for those who live in an environment where downright fraudulent studies are painted as revolutionary, it is a resource that serves as a shield against nonsense (a Bullshit Shield).

In short, with answers to 50% of the questions above, the risk of a bad, "black-box" paper already drops to 10%; from there the real work of analysis for reproducing the paper begins.

Below is a rather interesting video about papers that are nothing more than anecdotal evidence.

Data Mining Applications in the Financial Market – Application of data mining techniques in stock markets

Ehsan Hajizadeh, Hamed Davari Ardakani, and Jamal Shahrabi, all from Amirkabir University of Technology in Iran, present in this paper a good set of ideas for applying data mining in the financial market.

Along the lines of the excellent book by Roberto Pontes, already reviewed here, the authors lay out a very interesting range of possibilities for data mining techniques, in which data mining is not only a tool for exploratory analysis and pattern recognition; they also present the techniques as a way to analyze future trends and improve asset analysis.

As the authors rightly put it, the paper fills a gap in the literature on applying data mining, especially in going beyond the usual decision-tree-and-neural-network pair.

The techniques covered by the authors were: Decision Trees (decision alternatives), Neural Networks (parametric evaluation), Clustering (observing the dynamics of financial assets' characteristics), Factor Analysis (evaluating variables and the influence of each on a prediction model), Association Rules (relationships between assets according to the characteristics of the database), and Time Series (trend analysis and prediction).

For anyone who wants to engage in a serious financial data analysis project, this paper undoubtedly sheds very timely light on the subject and can support research in this area.

Application of data mining techniques in stock markets

Inteligência Artificial nos Investimentos

A good book for anyone who wants to begin studying Artificial Intelligence/Machine Learning, or who already has some experience with it, is Inteligência Artificial nos Investimentos by Roberto Pontes.

The book is self-published (the author is his own publisher), which at first sight may raise some suspicion among those who judge a book by its publisher; but as the book unfolds, it is easy to see that it is a great work on the subject.

Inteligência Artificial nos Investimentos takes a simple approach, yet it stays far from the simplistic treatments offered in many courses and tutorials on the internet; as it develops, it presents hybrid approaches without ever losing the book's main focus, which is presenting AI applied to investments.

The book is a pleasant read, and its most striking characteristic is a very practical approach relative to academic papers, which tend to make the subject complex. A passage I particularly like is on pages 69-70, where the author levels a pertinent criticism at papers and other works applying neural networks to the financial market.

A few low points: I don't know whether it was the author's choice, but the figures are in black and white, which somewhat hurts the readability of the comparison charts, and there are a few grammatical errors, though they in no way hinder the reading.

The book is a good manual for developing work in this segment, and the price (around R$ 30) is more than fair; being self-published, it remains a quality and accessible work.

Recommended for anyone who wants quality literature with a genuine how-to in Portuguese, plus practical application of what is proposed, with a well-defined scope.

Not recommended for those who like academicism or who have no familiarity with the financial market.

To buy: http://www.clubedeautores.com.br/book/48995–Inteligencia_Artificial_nos_Investimentos

Book website: http://www.neuroinvest.blogspot.com.br/

Book: Discovering Knowledge in Data: An Introduction to Data Mining

It is always difficult to comment on a book that is clearly one of those textbooks that can be called a classic. Dr. Daniel Larose's Discovering Knowledge in Data: An Introduction to Data Mining (ISBN-10: 0471666572 | ISBN-13: 978-0471666578) is a great book for anyone who wants an introduction to Data Mining and wants to escape the commonplace treatment to which dozens of books on the subject lead.

The author opens with a series of very pertinent overviews elucidating data mining tasks, even presenting CRISP-DM through case studies.

After that, the book turns to data pre-processing and very concisely explains the concept of Exploratory Data Analysis (EDA).

In the subsequent chapters, the author unpacks, through both conceptual and practical approaches, statistical approaches to estimation and prediction, the k-Nearest Neighbor algorithm, Decision Trees, Neural Networks, clustering techniques, and Association Rules.

Without a shadow of a doubt, the most valuable part of the book is Chapter 11, Model Evaluation Techniques, which covers some of the most neglected aspects of a Data Mining project: error rates, false positives, and false negatives. Drawing on practical examples from earlier chapters, it shows a practical way to evaluate mining models.

This book is for: introductory Data Mining courses, anyone interested in hands-on Data Mining analysis, database students, Data Mining enthusiasts, and undergraduate Database/Data Mining courses.

This book is NOT for: developing complex Data Mining projects, learning advanced Data Mining techniques, or anyone who dislikes (or does not understand) mathematical notation.

Strengths: easy to read, the author's well-planned sequence of topics, theoretical explanations without prolixity, and a practical approach.

Weaknesses: the author's mathematical treatment in certain sections, which can confuse readers less used to this kind of text, and its length (just over 220 pages).
