Hyper-parameter tuning for Support Vector Machines by Estimation of Distribution Algorithms

In the age of Deep Learning, it is always nice to see a paper about the good old Support Vector Machines. We will soon have a post about this technique here on the blog.

Hyper-Parameter Tuning for Support Vector Machines by Estimation of Distribution Algorithms

Abstract: Hyper-parameter tuning for support vector machines has been widely studied in the past decade. A variety of metaheuristics, such as Genetic Algorithms and Particle Swarm Optimization, have been considered to accomplish this task. Notably, exhaustive strategies such as Grid Search or Random Search continue to be implemented for hyper-parameter tuning and have recently shown results comparable to sophisticated metaheuristics. The main reason for the success of exhaustive techniques is that only two or three parameters need to be adjusted when working with support vector machines. In this chapter, we analyze two Estimation of Distribution Algorithms, the Univariate Marginal Distribution Algorithm and the Boltzmann Univariate Marginal Distribution Algorithm, to verify whether these algorithms preserve the effectiveness of Random Search and, at the same time, make the search for the optimal hyper-parameters more efficient without increasing the complexity of Random Search.
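To make the idea concrete, here is a minimal sketch of a continuous UMDA tuning the two classic SVM hyper-parameters, C and gamma, using cross-validated accuracy as the fitness. The dataset, population size, iteration count, and Gaussian initialization are illustrative assumptions, not the chapter's actual setup:

```python
# A minimal sketch of a continuous UMDA for SVM hyper-parameter tuning.
# C and gamma are searched in log10-space; all numeric settings below
# are arbitrary illustrative choices, not the chapter's configuration.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset (assumption)
rng = np.random.default_rng(0)

def fitness(log_c, log_gamma):
    # 3-fold cross-validated accuracy is the objective to maximize
    clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_gamma)
    return cross_val_score(clf, X, y, cv=3).mean()

# One univariate Gaussian per variable: log10(C) and log10(gamma)
mean = np.array([0.0, -2.0])
std = np.array([2.0, 2.0])

pop_size, elite, generations = 20, 5, 10
for gen in range(generations):
    pop = rng.normal(mean, std, size=(pop_size, 2))       # sample population
    scores = np.array([fitness(c, g) for c, g in pop])
    best = pop[np.argsort(scores)[-elite:]]               # truncation selection
    mean = best.mean(axis=0)                              # re-estimate marginals
    std = best.std(axis=0) + 1e-3                         # keep variance alive
    print(f"gen {gen}: best CV accuracy = {scores.max():.4f}")

print("tuned C =", 10.0 ** mean[0], "gamma =", 10.0 ** mean[1])
```

Note how small the search model is: one mean and one standard deviation per variable. That is why this family of algorithms can keep roughly the cost profile of Random Search while progressively concentrating samples in promising regions.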


Disjunctive Neural Networks

The paper is seminal (that is, it should be read with a bit more caution), but it represents a good step forward in the use of ANNs, given that Random Forests and Support Vector Machines have been showing considerably better results, academically speaking.

Below is the paper's abstract:

Artificial neural networks are powerful pattern classifiers; however, they have been surpassed in accuracy by methods such as support vector machines and random forests that are also easier to use and faster to train. Backpropagation, which is used to train artificial neural networks, suffers from the herd-effect problem, which leads to long training times and limits classification accuracy. We use the disjunctive normal form and approximate the Boolean conjunction operations with products to construct a novel network architecture. The proposed model can be trained by minimizing an error function, and it allows an effective and intuitive initialization which solves the herd-effect problem associated with backpropagation. This leads to state-of-the-art classification accuracy and fast training times. In addition, our model can be jointly optimized with convolutional features in a unified structure, leading to state-of-the-art results on computer vision problems with fast convergence rates. A GPU implementation of LDNN with optional convolutional features is also available.
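The key trick in the abstract is easy to see in code: each conjunction becomes a product of logistic units, and the disjunction is softened via De Morgan's law, OR(a) ≈ 1 − ∏(1 − a). Below is a minimal NumPy sketch of that forward pass; the shapes and random weights are illustrative assumptions, not the paper's configuration:

```python
# A minimal sketch of a disjunctive-normal-form forward pass: soft ANDs
# as products of sigmoids, soft OR via De Morgan's law. Shapes and random
# weights are illustrative assumptions, not the paper's actual model.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dnf_forward(x, W, b):
    """x: (d,) input; W: (n_disj, n_conj, d) weights; b: (n_disj, n_conj) biases."""
    h = sigmoid(W @ x + b)              # logistic discriminants, shape (n_disj, n_conj)
    conj = h.prod(axis=1)               # soft AND within each group of discriminants
    return 1.0 - np.prod(1.0 - conj)    # soft OR across groups (De Morgan's law)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3, 2))          # 4 disjuncts of 3 conjuncts on a 2-D input
b = rng.normal(size=(4, 3))
print(dnf_forward(np.array([0.5, -1.0]), W, b))  # output in (0, 1)
```

Because every operation here is a smooth product, the output is differentiable with respect to W and b, which is what allows the model to be trained by minimizing an error function, as the abstract states.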
