The Impact of Random Models on Clustering Similarity

Abstract: Clustering is a central approach for unsupervised learning. After clustering is applied, the most fundamental analysis is to quantitatively compare clusterings. Such comparisons are crucial for the evaluation of clustering methods as well as for other tasks such as consensus clustering. It is often argued that, in order to establish a baseline, clustering similarity should be assessed in the context of a random ensemble of clusterings. The prevailing assumption for the random clustering ensemble is the permutation model, in which the number and sizes of clusters are fixed. However, this assumption does not necessarily hold in practice; for example, multiple runs of K-means clustering return clusterings with a fixed number of clusters, while the cluster size distribution varies greatly. Here, we derive corrected variants of two clustering similarity measures (the Rand index and Mutual Information) in the context of two random clustering ensembles in which the number and sizes of clusters vary. In addition, we study the impact of one-sided comparisons in the scenario with a reference clustering. The consequences of different random models are illustrated using synthetic examples, handwriting recognition, and gene expression data. We demonstrate that the choice of random model can have a drastic impact on the ranking of similar clustering pairs and on the evaluation of a clustering method with respect to a random baseline; thus, the choice of random clustering model should be carefully justified.
Discussion: Given the prevalence of clustering methods for analyzing data, clustering comparison is a fundamental problem that is pertinent to numerous areas of science. In particular, the correction of clustering similarity for chance serves to establish a baseline that facilitates comparisons between different clustering solutions. Expanding on previous studies of the selection of an appropriate model for random clusterings (Meila, 2005; Vinh et al., 2009; Romano et al., 2016), our work provides an extensive summary of random models and clearly demonstrates the strong impact of the random model on the interpretation of clustering results.
Our results underscore the importance of selecting the appropriate random model for a given context. To that end, we offer the following guidelines: 1. Consider what is fixed by the clustering method: do all clusterings have a user-specified number of clusters (use Mnum), or is the cluster size sequence fixed (use Mperm)? 2. Is the comparison against a reference clustering (use a one-sided comparison), or are you comparing two derived clusterings (use a two-sided comparison)? The specific comparisons studied here are not meant to establish the superiority of a particular clustering identification technique or a specific random clustering model; rather, they illustrate the importance of the choice of the random model. Crucially, conclusions based on corrected similarity measures can change depending on the random model for clusterings. Therefore, previous studies that promoted methods based on evidence from corrected similarity measures should be re-evaluated in the context of the appropriate random model for clusterings (Yeung et al., 2001; de Souto et al., 2008; Yeung and Ruzzo, 2001; Thalamuthu et al., 2006; McNicholas and Murphy, 2010).
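
For readers reaching for code, note that the off-the-shelf corrected measures implement the permutation model (Mperm) only; corrections under the other random models would have to be coded from the paper's formulas. A minimal sketch using scikit-learn's Mperm-corrected scores:

# Comparing two labelings with the permutation-model (Mperm) correction
# implemented by scikit-learn; Mnum-style corrections are not available
# off the shelf and would need the paper's formulas.
from sklearn.metrics import adjusted_rand_score, adjusted_mutual_info_score

labels_a = [0, 0, 1, 1, 2, 2]   # toy clustering A
labels_b = [0, 0, 1, 2, 2, 2]   # toy clustering B

print(adjusted_rand_score(labels_a, labels_b))         # Rand index, corrected under Mperm
print(adjusted_mutual_info_score(labels_a, labels_b))  # Mutual Information, corrected under Mperm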
The Impact of Random Models on Clustering Similarity

A gentle introduction to DBSCAN

From the series “Beyond the K-Means clustering”…

Density Based Spatial Clustering of Applications with Noise (DBSCAN)

By Abhijit Annaldas, Microsoft.

DBSCAN is a different type of clustering algorithm with some unique advantages. As the name indicates, this method focuses on the proximity and density of observations to form clusters. This is very different from K-Means, where an observation becomes part of the cluster represented by the nearest centroid. DBSCAN can also identify outliers: observations that do not belong to any cluster. And since DBSCAN identifies the number of clusters on its own, it is very useful for unsupervised learning when we don't know how many clusters the data may contain.

K-Means clustering may cluster loosely related observations together. Every observation eventually becomes part of some cluster, even if the observations are scattered far away in the vector space. Since clusters depend on the mean value of the cluster elements, each data point plays a role in forming them, and a slight change in the data points might affect the clustering outcome. This problem is greatly reduced in DBSCAN due to the way its clusters are formed.
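
A minimal scikit-learn sketch of both properties (the eps and min_samples values are illustrative): dense regions become clusters, and the label -1 marks the outliers that K-Means would have forced into some cluster:

from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.08, random_state=0)
db = DBSCAN(eps=0.2, min_samples=5).fit(X)

print(set(db.labels_))                    # cluster ids; -1 marks outliers/noise
print((db.labels_ == -1).sum(), "outliers")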

A gentle introduction to DBSCAN

A good paper to apply in Brazil – Predicting Public Corruption with Neural Networks

Predicting Public Corruption with Neural Networks: An Analysis of Spanish Provinces

Abstract: We contend that corruption must be detected as soon as possible so that corrective and preventive measures may be taken. Thus, we develop an early warning system based on a neural network approach, specifically self-organizing maps, to predict public corruption based on economic and political factors. Unlike previous research, which is based on the perception of corruption, we use data on actual cases of corruption. We apply the model to Spanish provinces in which actual cases of corruption were reported by the media or went to court between 2000 and 2012. We find that the taxation of real estate, economic growth, the increase in real estate prices, the growing number of deposit institutions and non-financial firms, and the same political party remaining in power for long periods seem to induce public corruption. Our model provides different profiles of corruption risk depending on the economic conditions of a region conditional on the timing of the prediction. Our model also provides different time frameworks to predict corruption up to 3 years before cases are detected.

Concluding Remarks: We develop a neural network model to predict public corruption based on economic and political factors. We apply this model to the Spanish provinces in which corruption cases have been uncovered by the media or have gone to trial. Unlike previous research, which is based on the perception of corruption, we use data on actual cases of corruption. The output of our model is a set of SOMs, which allow us to predict corruption in different time scenarios before corruption cases are detected.

Our model provides two main insights. First, we identify some underlying economic and political factors that can result in public corruption: taxation of real estate, economic growth, an increase in real estate prices, an increase in the number of deposit institutions, and the same party remaining in office for a long time all seem to induce public corruption. Second, our model provides different time frameworks to predict corruption. In some regions we are able to detect latent corruption long before it emerges (up to 3 years); in other regions our model provides short-term alerts and suggests the need to take urgent preventive or corrective measures.

Given the connection we find between economic and political factors and public corruption, some caveats must be applied to our results. Our model does not mean that economic growth or a given party remaining in power causes public corruption, but that the fastest-growing regions, or the ones ruled by the same party for a long time, are the most likely to be involved in corruption cases. Economic growth per se is not a sign of corruption; rather, it increases the interactions between economic agents and public officers. Similarly, being in office too long might prove to be an incentive for creating a network of unfair relations between politicians and economic agents. In addition, more competitive markets may induce some agents to pay bribes in order to obtain public concessions or a better competitive position. These results are consistent with some research exploring the relation between economic growth and corruption (Kuo et al. 2002; Kaufman and Rousseeuw 2009; Chen et al. 2002).

Since corruption remains a widespread global concern, a key issue in our research is the generalizability of our model and the proposed actions. We have used fairly common macroeconomic and political variables that are widely available from public sources in many countries. In turn, our model can be applied to other regions and countries as well. Of course, the model could be improved if country- or region-specific factors were taken into account.

Our approach is interesting both for academia and public authorities. For academia, we provide an innovative way to predict public corruption using neural networks. These methods have often been used to predict corporate financial distress and other economic events, but, as far as we are aware, no studies have yet attempted to use neural networks to predict public corruption. Consequently, we extend the domain of neural network applications. For public authorities, we provide a model that improves the efficiency of the measures aimed at fighting corruption. Because the resources available to combat corruption are limited, authorities can use the early corruption warning system, which categorizes each province according to its corruption profile, in order to narrow their focus and better implement preventive and corrective policies.

In addition, our model predicts corruption cases long before they are discovered, which enhances anticipatory measures. Our model can be especially relevant in countries suffering the severest corruption problems. In fact, European Union authorities are highly concerned about widespread corruption in certain countries. The study of new methodologies based on neural networks is a fertile field to be applied to a number of legal and economic issues. One possible direction for future research is to extend our model to the international framework and to take into account country-specific factors. Another application may be the detection of patterns of corruption and money laundering across different countries in the European Union.

A good paper to apply in Brazil – Predicting Public Corruption with Neural Networks

Feature Screening in Large Scale Cluster Analysis

More work on clustering.

Feature Screening in Large Scale Cluster Analysis – Trambak Banerjee, Gourab Mukherjee, Peter Radchenko

Abstract: We propose a novel methodology for feature screening in clustering massive datasets, in which both the number of features and the number of observations can potentially be very large. Taking advantage of a fusion penalization based convex clustering criterion, we propose a very fast screening procedure that efficiently discards non-informative features by first computing a clustering score corresponding to the clustering tree constructed for each feature, and then thresholding the resulting values. We provide theoretical support for our approach by establishing uniform non-asymptotic bounds on the clustering scores of the “noise” features. These bounds imply perfect screening of non-informative features with high probability and are derived via careful analysis of the empirical processes corresponding to the clustering trees that are constructed for each of the features by the associated clustering procedure. Through extensive simulation experiments we compare the performance of our proposed method with other screening approaches, popularly used in cluster analysis, and obtain encouraging results. We demonstrate empirically that our method is applicable to cluster analysis of big datasets arising in single-cell gene expression studies.

Conclusions: We propose COSCI, a novel feature screening method for large scale cluster analysis problems that are characterized by both large sample sizes and high dimensionality of the observations. COSCI efficiently ranks the candidate features in a non-parametric fashion and, under mild regularity conditions, is robust to the distributional form of the true noise coordinates. We establish theoretical results supporting ideal feature screening properties of our proposed procedure and provide a data driven approach for selecting the screening threshold parameter. Extensive simulation experiments and real data studies demonstrate encouraging performance of our proposed approach. An interesting topic for future research is extending our marginal screening method by means of utilizing multivariate objective criteria, which are more potent in detecting multivariate cluster information among marginally unimodal features. Preliminary analysis of the corresponding ℓ2 fusion penalty based criterion, which, unlike the ℓ1 based approach used in this paper, is non-separable across dimensions, suggests that this criterion can provide a way to move beyond marginal screening.
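
This is not the paper's COSCI procedure, but the general recipe it instantiates — score every feature marginally for cluster structure, then threshold — can be sketched in a few lines. Here the score is a stand-in (one minus the ratio of within-cluster to total sum of squares for a one-dimensional 2-means fit) rather than the fusion-penalty score, and the threshold is illustrative where the paper derives a data-driven one:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 500
informative = np.concatenate([rng.normal(-2, 1, n // 2), rng.normal(2, 1, n // 2)])
noise = rng.normal(0, 1, n)            # no cluster structure
X = np.column_stack([informative, noise])

def score(feature):
    # stand-in clustering score: fraction of variance explained by a 1-D 2-means fit
    f = feature.reshape(-1, 1)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(f)
    return 1 - km.inertia_ / ((f - f.mean()) ** 2).sum()

scores = [score(X[:, j]) for j in range(X.shape[1])]
keep = [j for j, s in enumerate(scores) if s > 0.7]   # illustrative threshold
print(scores, "-> keep features", keep)               # the noise feature is screened out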

Feature Screening in Large Scale Cluster Analysis

Distributed K-Means over compressed binary data

And who said K-Means was dead, huh?

Distributed K-means over Compressed Binary Data

Abstract—We consider a network of binary-valued sensors with a fusion center. The fusion center has to perform K-means clustering on the binary data transmitted by the sensors. In order to reduce the amount of data transmitted within the network, the sensors compress their data with a source coding scheme based on LDPC codes. We propose to apply the K-means algorithm directly over the compressed data, without reconstructing the original sensor measurements, in order to avoid potentially complex decoding operations. We provide approximated expressions for the error probabilities of the K-means steps in the compressed domain. From these expressions, we show that applying the K-means algorithm in the compressed domain makes it possible to recover the clusters of the original domain. Monte Carlo simulations illustrate the accuracy of the obtained approximated error probabilities, and show that the coding rate needed to perform K-means clustering in the compressed domain is lower than the rate needed to reconstruct all the measurements.

Conclusion: In this paper, we considered a network of sensors which transmit their compressed binary measurements to a fusion center. We proposed to apply the K-means algorithm directly over the compressed data, without reconstructing the sensor measurements. From a theoretical analysis and Monte Carlo simulations, we showed the efficiency of applying K-means in the compressed domain. We also showed that the rate needed to perform K-means on the compressed vectors is lower than the rate needed to reconstruct all the measurements.
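
The paper's point is that the clustering step itself does not need the decoded measurements. As a toy illustration only (it ignores the LDPC-based compression entirely), here is a K-means-style loop over binary vectors in the uncompressed domain, using Hamming distance for the assignment step and majority-vote bits for the centroid update:

import numpy as np

rng = np.random.default_rng(1)
c0 = (rng.random((100, 32)) < 0.1).astype(int)       # noisy copies of the all-zeros word
c1 = 1 - (rng.random((100, 32)) < 0.1).astype(int)   # noisy copies of the all-ones word
X = np.vstack([c0, c1])

centroids = np.array([X[0], X[-1]])                  # simple init: one point from each half
for _ in range(10):
    d = (X[:, None, :] != centroids[None, :, :]).sum(axis=2)  # Hamming distances
    labels = d.argmin(axis=1)                                  # assignment step
    centroids = np.array([(X[labels == k].mean(axis=0) > 0.5).astype(int)
                          for k in range(2)])                  # majority-vote update
print(np.bincount(labels))                                     # ~100 points per cluster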

Distributed K-Means over compressed binary data

An Introduction to Cluster Analysis

Even though exploratory data analysis takes up a great deal of space in data science problems, unsupervised learning methods still have their value, even if scientific and professional communities discuss them far less often than predictive methods.

One of the most underrated techniques in machine learning is clustering (cluster analysis).

This post by Kunal Jain offers one of the best overviews of cluster analysis and its peculiarities; a short scikit-learn sketch with one representative of each model family follows the list below.

Connectivity models: As the name suggests, these models are based on the notion that data points closer together in data space are more similar to each other than data points lying farther away. These models can follow two approaches. In the first, they start by classifying all data points into separate clusters and then aggregate them as the distance decreases. In the second, all data points are classified as a single cluster and then partitioned as the distance increases. The choice of distance function is subjective. These models are very easy to interpret but lack the scalability to handle big datasets. Examples are the hierarchical clustering algorithm and its variants.
Centroid models: These are iterative clustering algorithms in which the notion of similarity is derived from the closeness of a data point to the centroid of the clusters. The K-Means clustering algorithm is a popular algorithm in this category. In these models, the number of clusters required at the end has to be specified beforehand, which makes it important to have prior knowledge of the dataset. These models run iteratively to find a local optimum.
Distribution models: These clustering models are based on the notion of how probable it is that all data points in a cluster belong to the same distribution (for example, Gaussian). These models often suffer from overfitting. A popular example is the Expectation-Maximization algorithm, which uses multivariate normal distributions.
Density models: These models search the data space for regions of varying density of data points. They isolate the different density regions and assign the data points within a region to the same cluster. Popular examples are DBSCAN and OPTICS.
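
As mentioned above, here is a short scikit-learn sketch with one representative of each of the four families (all parameter values are illustrative):

from sklearn.cluster import AgglomerativeClustering, KMeans, DBSCAN
from sklearn.mixture import GaussianMixture
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

connectivity = AgglomerativeClustering(n_clusters=3).fit_predict(X)  # connectivity model
centroid = KMeans(n_clusters=3, n_init=10).fit_predict(X)            # centroid model
distribution = GaussianMixture(n_components=3).fit_predict(X)        # distribution model (EM)
density = DBSCAN(eps=1.0, min_samples=5).fit_predict(X)              # density model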

An Introduction to Cluster Analysis

Clustering with Mean Shift

In this post by Matt Nedrich it is almost impossible not to grasp the advantages of Mean Shift, a little-known and very useful technique for clustering problems where the number of clusters cannot be determined in advance.
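
A minimal scikit-learn sketch: note that, unlike K-Means, no number of clusters is passed in; it falls out of the estimated bandwidth (the quantile value is illustrative):

from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.7, random_state=0)
bw = estimate_bandwidth(X, quantile=0.2)   # kernel width estimated from the data
ms = MeanShift(bandwidth=bw).fit(X)
print(len(ms.cluster_centers_), "clusters found")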

Clustering with Mean Shift

Don't Outsource Data Normalization/Standardization

In the data mining literature, many authors claim that preprocessing is a phase that takes up 80% of the time, and that the algorithms, with their countless ways of discovering patterns, rules, and associations, are the big stars and therefore own the most important 20% of data mining.

Based on empirical experience, I believe preprocessing takes up 90% of the time and is also responsible for 95% of the result, from a data mining perspective in which:

Result = Computational Cost + Consistency of Results + Reproducibility

Where computational cost is defined as:

Computational Cost = Space Cost + Time Cost

That said, I see in forums and even in talks that, given the difficulty of working with data, the famous outsourcing of data normalization/standardization takes place. It happens when data analysis algorithms/solutions take all the data and simply 'make the magic happen' (believe me, I saw a Sales Engineer from one of the top 3 Business Intelligence tools, per Gartner, say exactly that).

However, in some preliminary experiments with MARS (Multivariate Adaptive Regression Splines) in Statistica 10 and MLPRegression in Weka, I noticed that many values of the response variables (i.e., values used to test and/or mathematically check the algorithm) made no sense at all and, worse, did not provide the reproducibility needed to perform a simple hindcast (not to be confused with backtesting).

Let's look at two practical cases.

In this first case, look at the output of the RBF classifier (weka.classifiers.functions.RBFClassifier) in Weka below:

=== Run information ===
Scheme:       weka.classifiers.functions.RBFClassifier -N 2 -R 0.01 -L 1.0E-6 -C 1 -G -A -P 1 -E 1 -S 1
Relation:     WekaExcel-weka.filters.unsupervised.attribute.Remove-R8-weka.filters.unsupervised.attribute.Remove-R1,
Instances:    671
Attributes:   5
Test mode:    evaluate on training data

=== Classifier model (full training set) ===
Output weights for different classes:
-0.658867061591664 0.7268781531574563
Unit center:
1.103191478913074 0.3187908580844808 1.339867551710916 -2.348360195617461
Output weights for different classes:
7.294836535867017 -7.294947917203681
Unit center:
0.001306958680758934 -0.001914844611731498 6.641791379162694E-4 1.0009616503748857
Attribute weights:
1.2177544598559824 1.0557440195728327 1.6425390340750194 3.7580013072113965
Bias weights for different classes:
-1.1716428958295801 1.171406635309079
Time taken to build model: 0.33 seconds


Nothing special so far: just check the significance test of the regression coefficients, plug the coefficients in, and then carry out the computation, right?

Perfect so far, but there is a small problem: all of the model's variables are nominal, and the classifier has already normalized the data. In other words, at this exact moment you have turned the model into a black box, even with all the coefficients exposed. The model's transparency was traded away because the data were not normalized by you.

In this second example, let's look at the output of an RBFNetwork in Weka.

=== Classifier model (full training set) ===
 Radial basis function network
(Linear regression applied to K-means clusters as basis functions):
 Linear Regression Model
 Dummy_Paid =
      -0.2146 * pCluster_0_0 +
      0.2148 * pCluster_0_1 +

In this case, I am still using the same nominal variables.

One of the properties of Weka's RBFNetwork is that, roughly speaking, the algorithm lets you choose the number of clusters and then fits a radial basis function (i.e., it assigns values based on the distance from a center) to perform the classification.

Below is the algorithm's description (a rough Python sketch of this mechanism follows the option list):

Class that implements a normalized Gaussian radial basis function network.
It uses the k-means clustering algorithm to provide the basis functions and learns either a logistic regression (discrete class problems)
or linear regression (numeric class problems) on top of that. Symmetric multivariate Gaussians are fit to the data from each cluster. 
If the class is nominal it uses the given number of clusters per class. It standardizes all numeric attributes to zero mean and unit variance.

debug -- If set to true, classifier may output additional info to the console.
ridge -- Set the Ridge value for the logistic or linear regression.
maxIts -- Maximum number of iterations for the logistic regression to perform. Only applied to discrete class problems.
clusteringSeed -- The random seed to pass on to K-means.
minStdDev -- Sets the minimum standard deviation for the clusters.
numClusters -- The number of clusters for K-Means to generate.
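
Not Weka's implementation, but a rough Python sketch of the mechanism the description above names: K-means supplies the basis-function centers, Gaussian activations over the distances to those centers become the features, and a logistic regression is learned on top (the unit standard deviation is an assumption for brevity):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=4, random_state=0)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
dists = np.linalg.norm(X[:, None, :] - km.cluster_centers_[None, :, :], axis=2)
rbf_features = np.exp(-dists ** 2 / 2.0)         # Gaussian basis functions (unit std dev)

clf = LogisticRegression().fit(rbf_features, y)  # logistic layer on the RBF features
print(clf.score(rbf_features, y))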

The most important sentence is "It standardizes all numeric attributes to zero mean and unit variance."

That is, the algorithm standardizes (rescales) every numeric attribute to the scale of a standard normal distribution: zero mean and unit variance.
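
A minimal sketch of doing that standardization yourself, so the rescaling parameters stay in your hands instead of hidden inside the classifier:

import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[10.0, 200.0], [12.0, 180.0], [11.0, 220.0]])
scaler = StandardScaler().fit(X)     # learns the per-column mean and standard deviation
Xz = scaler.transform(X)             # zero mean, unit variance per column

print(scaler.mean_, scaler.scale_)   # the parameters you now control and can store
print(Xz.mean(axis=0), Xz.std(axis=0))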

Knowing this, I ran my experiment with the following parameters.


In this setup, since I controlled the data preprocessing, I deliberately made the decision variable the generator of the two clusters' centroids. After that, I asked Eibe Frank, one of Weka's developers, about the significance test of the output coefficients and got the following answer:

pCluster_0_0 and pCluster_0_1 are cluster membership probabilities. RBFNetwork internally uses clusters build using SimpleKMeans. They are not output, so you can’t manually reproduce the classifications generated by RBFNetwork


In other words, had I not inserted my decision variable (or had I lacked insight into what the algorithm was doing), I would never have known how the algorithm arrived at its result.

In this case the preprocessing was done in a reproducible, or at least traceable, way, which helped me understand the model and, as a bonus, gave me a mindset for working on radial basis function and clustering problems. One more item on the data mining Batman's utility belt.

The big lesson is that whoever controls the preprocessing controls the model, both in terms of the algorithm's execution speed and in terms of influence over the model's results. So: don't outsource data normalization.


Don't Outsource Data Normalization/Standardization

Whisky Analysis with K-Means

A great analysis using K-Means in R. More than the analysis itself, this post is a lesson in how to conduct a cluster analysis when the number of clusters must be set arbitrarily, as K-Means requires.

As a result, the generation of the results and the analysis become much more of a walk-through and much less of a black box.

The final result?

“[…] The results indicate that there is a lot of variation in flavor profiles within the different scotch whisky regions. Note that initial cluster centers are chosen at random. In order to replicate the results, you will need to run the following code before your analysis.

set.seed(1)

Further data analysis would be required to determine whether proximity to types of water sources or terrain types drive common flavor profiles. This could be done by obtaining shape files and adding them as an additional layer to the ggmap plot.
For me, I have identified my next to-try single malt. Talisker is still within the familiar realm of cluster 4 but a little more malty, fruity and spicy. Sounds like the perfect holiday mix. […]”
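
The same reproducibility point, sketched in Python rather than the post's R: K-Means starts from random centers, so fixing the seed is what makes a cluster assignment repeatable (the dataset and parameters here are illustrative):

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=200, centers=4, random_state=0)
labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(X)  # analogous to set.seed(1)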

Whisky Analysis with K-Means

Predicting Criminal Movements

A good article on modeling criminal events and their movement.

[…] Data available on distance between criminals’ homes and their targets shows that burglars are willing to travel longer distances for high-value targets, and tend to employ different means of transportation to make these long trips. Of course, this tendency differs among types of criminals. Professionals and older criminals may travel further than younger amateurs. A group of professional burglars planning to rob a bank, for instance, would reasonably be expected to follow a Lévy flight.

“There is actually a relationship between how far these criminals are willing to travel for a target and the ability for a hotspot to form,” explain Kolokolnikov and McCalla. […]
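
A Lévy flight is a random walk whose step lengths come from a heavy-tailed distribution, so the walker makes many short moves punctuated by rare long jumps. A minimal numpy sketch (the tail exponent is illustrative):

import numpy as np

rng = np.random.default_rng(42)
n_steps = 1000
lengths = rng.pareto(1.5, n_steps) + 1          # heavy-tailed step lengths
angles = rng.uniform(0, 2 * np.pi, n_steps)     # uniformly random directions
steps = lengths[:, None] * np.column_stack([np.cos(angles), np.sin(angles)])
path = steps.cumsum(axis=0)                     # walker positions over time
print(path[-1], lengths.max())                  # endpoint and the largest single jump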

Predicting Criminal Movements

Clustering Indian Market Assets for Portfolio Management

This paper, published in the academic journal Expert Systems with Applications, presents interesting work in which Indian researchers used clustering techniques to build and manage portfolios of assets from the Indian stock exchange and compared the results with the Sensex index.

For asset selection, the research draws on ideas from Markowitz's Portfolio Selection paper, in which the portfolio is composed not only of the assets with the best financial return but also of those with low risk.

Starting from this principle, companies are grouped into clusters according to a few technical analysis indicators and, in a second step, portfolios with each company's weights are formed according to the value of the cluster validation index.

The paper brings interesting ideas, but its weak point (probably not addressed by the authors out of unawareness or abstraction) is that technical factors are inadequate for this kind of classification because of their high transaction volume, and the approach is impractical in terms of keeping the data up to date for asset allocation. Had the paper focused on fundamentalist, macroeconomic, and sector indicators to frame portfolio construction and management, it would have shown better results.

Clustering Indian stock market data for portfolio management

Clustering Indian Market Assets for Portfolio Management

Customer Segmentation Basics in Marketing

For those who want to apply data mining specifically to Marketing, this article from Inside Data Mining explains very interesting characteristics of the subject and how business metrics should be built.

Customer Segmentation Basics in Marketing

Data Mining Applications in Financial Markets – Application of data mining techniques in stock markets

Ehsan Hajizadeh, Hamed Davari Ardakani, and Jamal Shahrabi, all from the Amirkabir University of Technology in Iran, present in this paper a good survey of ideas for applying data mining to the stock market.

Along the lines of Roberto Pontes's excellent book, already reviewed here, the authors lay out a very interesting range of possibilities for data mining techniques, in which data mining is not only a tool for exploratory analysis and pattern recognition but also a way to analyze future trends and improve asset analysis.

As the authors rightly put it, the paper fills a gap in the literature on applied data mining, especially beyond the usual decision tree and neural network duo.

The techniques surveyed by the authors are: decision trees (decision alternatives), neural networks (parametric evaluation), clustering (observing the dynamics of financial asset characteristics), factor analysis (evaluating variables and each one's influence on a prediction model), association rules (relationships between assets according to the characteristics of the dataset), and time series (trend analysis and prediction).

For anyone who wants to engage in a serious financial data analysis project, this article undoubtedly sheds timely light on the subject and can support research in this area.

Application of data mining techniques in stock markets

Data Mining Applications in Financial Markets – Application of data mining techniques in stock markets

Clustering is difficult when it does not matter

This paper nicely covers how metrics are developed to assess whether a clustering is good in grouping terms or not; the mathematical proposal it presents, however, is far from trivial.

Clustering is difficult only when it does not matter

Clustering is difficult when it does not matter

A Comparative Study of Cluster Analysis Algorithms in Data Mining – Fernando Prass

One of my references for studying data mining has been the work on MSc Fernando Prass's blog, which combines two extremely important attributes for explaining such an important technique: 1) simplicity of approach and 2) absence of clichés in its treatment of the literature.

In this master's dissertation he compares a wide range of clustering algorithms; I strongly recommend reading it to anyone who wants to research this branch of data mining in more depth.

A Comparative Study of Cluster Analysis Algorithms in Data Mining – Fernando Prass