Data Science: How regulators, professors, and practitioners are getting it wrong

This DataRobot post is one of those posts that shows just how much the evolution of Big Data platforms, combined with a larger computational and predictive arsenal, is sweeping away any bullshit dressed up in technicalities when it comes to Data Science.

I'm reproducing it in full, because it's worth using this post whenever you have to justify to some numbers bureaucrat (I won't name names, given the butthurt it could cause) why nobody cares about p-values, hypothesis tests, and so on anymore in an era of abundant data; and, above all, why statistical significance is dying.

“Underpinning many published scientific conclusions is the concept of ‘statistical significance,’ typically assessed with an index called the p-value. While the p-value can be a useful statistical measure, it is commonly misused and misinterpreted.”  ASA Statement on Statistical Significance and p-Values

If you’ve ever heard the words “statistically significant” or “fail to reject,” then you are among the countless thousands who have been traumatized by an academic approach to building predictive models.  Unfortunately, I can’t claim innocence in this matter.  I taught statistics when I was in grad school, and I do have a Ph.D. in applied statistics.  I was born into the world that uses formal hypothesis testing to justify every decision made in the model building process:

Should I include this variable in my model?  How about an F-test?

Do my two samples have different means?  Student’s t-test!

Does my model fit my data?  Why not try the Hosmer–Lemeshow test or maybe use the Cramér–von Mises criterion?

Are my variables correlated?  How about a test using a Pearson Correlation Coefficient?

And on, and on, and on, and on…

These tests are all based on various theoretical assumptions.  If the assumptions are valid, then they allegedly tell you whether or not your results are “statistically significant.”

Over the last century, as businesses and governments have begun to incorporate data science into their business processes, these “statistical tests” have also leaked into commercial and regulatory practices.

For instance, federal regulators in the banking industry issued this tortured guidance in 2011:

“… statistical tests depend on specific distributional assumptions and the purpose of the model… Any single test is rarely sufficient, so banks should apply a variety of tests to develop a sound model.”

In other words, statistical tests have lots of assumptions that are often (always) untrue, so use lots of them. (?!)

Here’s why statistical significance is a waste of time


If assumptions are invalid, the tests are invalid — even if your model is good

I developed a statistical test of my very own for my dissertation.  The procedure for doing this is pretty simple.  First, you make some assumptions about independence and data distributions, and variance, and so on.  Then, you do some math that relies (heavily) on these assumptions in order to come up with a p-value. The p-value tells you what decision to make.

As an example, let’s take linear regression.  Every business stats student memorizes the three assumptions associated with the p-values in this approach: independence (for which no real test exists), constant variance, and normality.  If all these assumptions aren’t met, then none of the statistical tests that you might do are valid; yet regulators, professors, scientists, and statisticians all expect you to rely (heavily) on these tests.
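
A minimal sketch of how these checks look in practice, in Python (assuming numpy, scipy, and statsmodels are available; the data and variable names are invented purely for illustration):

# Sketch only: fit a regression whose constant-variance assumption is
# deliberately violated, then run the usual assumption tests.
import numpy as np
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(42)
n = 500
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=1.0 + np.abs(x), size=n)  # noise grows with x

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()
print("coefficient p-values:", fit.pvalues)  # reported as if assumptions held

print("Shapiro-Wilk (normality) p-value   :", stats.shapiro(fit.resid)[1])
print("Breusch-Pagan (const. var.) p-value:", het_breuschpagan(fit.resid, X)[1])
# Tiny p-values in the last two lines say the assumptions are violated,
# yet the coefficient p-values above are still produced without complaint.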

What are you to do if your assumptions are invalid?  In practice, the usual response is to wave your hands about “robustness” or some such thing and then continue along the same path.

If your data is big enough, EVERYTHING is significant

“The primary product of a research inquiry is one or more measures of effect size, not P values.” Jacob Cohen

As your data gets bigger and bigger (as data tends to do these days), everything becomes statistically significant.  On one hand, this makes intuitive sense.  For example, the larger a dataset is, the more likely an F-test is to tell you that your GLM coefficients are nonzero; i.e., larger datasets can support more complex models, as expected.  On the other hand, for many assumption validity tests — e.g., tests for constant variance — statistical significance indicates invalid assumptions.  So, for big datasets, you end up with tests telling you every feature is significant, but assumption tests telling you to throw out all of your results.
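
A minimal sketch of that effect in Python (assuming numpy and scipy; the data is simulated): a difference in means of 0.01 standard deviations is practically nothing, yet it becomes “significant” once the sample is big enough.

# Sketch only: a negligible difference in means (0.01 standard deviations)
# becomes "statistically significant" once the sample is large enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
for n in (100, 10_000, 1_000_000):
    a = rng.normal(loc=0.00, size=n)
    b = rng.normal(loc=0.01, size=n)  # tiny effect, practically irrelevant
    _, p = stats.ttest_ind(a, b)
    print(f"n = {n:>9,}   p-value = {p:.4f}")
# The effect size never changes, but the p-value collapses toward zero as n grows.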

Validating assumptions is expensive and doesn’t add value

Nobody ever generated a single dollar of revenue by validating model assumptions (except of course the big consulting firms that are doing the work).  No prospect was converted; no fraud was detected; no marketing message was honed by the drudgery of validating model assumptions.  To make matters worse, it’s a never ending task.  Every time a model is backtested, refreshed, or evaluated, the same assumption-validation-song-and-dance has to happen again.  And that’s assuming that the dozens of validity tests don’t give you inconsistent results.  It’s a gigantic waste of resources because there is a better way.

You can cheat, and nobody will ever know

Known as data dredging, data snooping, or p-hacking, it is very easy and relatively undetectable to manufacture statistically significant results.  Andrew Gelman observed that most modelers have a (perverse) incentive to produce statistically significant results — even at the expense of reality.  It’s hardly surprising that these techniques exist, given the pressure to produce valuable data-driven solutions.  This risk, on its own, should be sufficient reason to abandon p-values entirely in some settings, like financial services, where cheating could result in serious consequences for the economy.
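
A minimal sketch of how easy this is, in Python (assuming numpy and scipy; everything here is simulated noise): test enough pure-noise features against a pure-noise target and roughly 5% of them come out “significant” at the 0.05 level.

# Sketch only: p-hacking by brute force. One hundred pure-noise features,
# a pure-noise target, and we still "discover" significant relationships.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_obs, n_features = 1_000, 100
X = rng.normal(size=(n_obs, n_features))
y = rng.normal(size=n_obs)

significant = [j for j in range(n_features)
               if stats.pearsonr(X[:, j], y)[1] < 0.05]
print(f"{len(significant)} of {n_features} noise features test as 'significant'")
# Roughly 5 false discoveries are expected at the 0.05 level; report only
# those and the rest of the search is invisible.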

If the model is misspecified, then your p-values are likely to be misleading

Suppose you’re investigating whether or not a gender gap exists in America.  Lots of things are correlated with gender; e.g., career choice, hours worked per week, percentage of vacation taken, participation in a STEM career, and so on.  To the extent that any of these variables are excluded from your investigation — whether you know about them or not — the significance of gender will be overstated.  In other words, statistical significance will give the impression that a gender gap exists, when it may not — simply due to model misspecification.
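
A minimal sketch of that misspecification in Python (assuming numpy and statsmodels; the pay, hours, and gender variables are simulated, not real data): pay depends only on hours worked, hours happen to differ by gender, and a model that omits hours reports a highly “significant” gender effect anyway.

# Sketch only: omitted-variable bias. Pay is driven entirely by hours worked,
# hours differ by gender, and a model without hours "finds" a gender gap.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5_000
gender = rng.integers(0, 2, size=n)              # 0 or 1
hours = 40 + 5 * gender + rng.normal(size=n)     # hours correlated with gender
pay = 30 * hours + rng.normal(scale=50, size=n)  # no direct gender effect

misspecified = sm.OLS(pay, sm.add_constant(gender.astype(float))).fit()
full = sm.OLS(pay, sm.add_constant(np.column_stack([gender, hours]))).fit()

print("gender p-value, hours omitted :", misspecified.pvalues[1])
print("gender p-value, hours included:", full.pvalues[1])
# The misspecified model reports a vanishingly small p-value for gender;
# once hours are included, the apparent effect disappears.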

Only out-of-sample accuracy matters

Whether or not results are statistically significant is the wrong question.  The only metric that actually matters when building models is whether or not your models can make accurate predictions on new data.  Not only is this metric difficult to fake, but it also perfectly aligns with the business motivation for building the model in the first place.  Fraud models that do a good job predicting fraud actually prevent losses.  Underwriting models that accurately segment credit risk really do increase profits.  Optimizing model accuracy instead of identifying statistical significance makes good business sense.

Over the course of the last few decades lots and lots of tools have been developed outside of the hypothesis testing framework.  Cross-validation, partial dependence, feature importance, and boosting/bagging methods are just some of the tools in the machine learning toolbox.  They provide a means not only for ensuring out-of-sample accuracy, but also understanding which features are important and how complex models work.
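
A minimal sketch of that workflow in Python (assuming scikit-learn is available; the dataset is synthetic): out-of-sample accuracy from cross-validation and feature importances from an ensemble model, with no hypothesis test in sight.

# Sketch only: judge the model by out-of-sample performance and feature
# importance instead of significance tests.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2_000, n_features=20,
                           n_informative=5, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)

scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("out-of-sample AUC per fold:", np.round(scores, 3))

model.fit(X, y)
top = np.argsort(model.feature_importances_)[::-1][:5]
print("most important features:", top)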

A survey of these methods is out of scope, but let me close with a final point.  Unlike traditional statistical methods, tasks like cross-validation, model tuning, feature selection, and model selection are highly automatable.  Custom-coded solutions of any kind are inherently error prone, even for the most experienced data scientist.

Many of the world’s biggest companies are recognizing that bespoke models, hand-built by Ph.D.s, are too slow and expensive to develop and maintain.  Solutions like DataRobot provide a way for business experts to build predictive models in a safe, repeatable, systematic way that yields business value much more quickly and cheaply than other approaches.

By Greg Michaelson, Director – DataRobot Labs


A devastating post by Stephen Few on Big Data

Defying the marketing departments of the big software vendors, Stephen Few has been waging an almost personal war against the Big Data industry.

Since the term gets more attention on social media and in marketing than it gets practiced in the field (by the people I call the true soldiers of data science, such as Luti, Erickson Ricci, Big Leka, Fabiano Amorim, Fabrício Lima, Marcos Freccia, and others), there is an entropy of opinions and concepts. With that entropy, the only losers are the uninformed, who cannot separate signal from noise and end up becoming easy prey for products of dubious quality.

The victim this time is the book Dataclysm, by Christian Rudder.

At one point in the book, the author levels a kind of criticism at the scientific process whereby some researchers in the applied behavioral sciences use their own students as a sample, and, in an almost pedantic way, he calls this research WEIRD (White, Educated, Industrialized, Rich and Democratic): a play on the word “weird”, with a somewhat pejorative connotation.

I understand how it happens: in person, getting a real representative data set is often more difficult than the actual experiment you’d like to perform. You’re a professor or postdoc who wants to push forward, so you take what’s called a “convenience sample”—and that means the students at your university. But it’s a big problem, especially when you’re researching belief and behavior. It even has a name: It’s called WEIRD research: white, educated, industrialized, rich, and democratic. And most published social research papers are WEIRD.

What could pass as criticism from an author whose background includes the merit of being one of the co-founders of OKCupid turns, on a more careful reading, into the exposure of a gap in his data analysis and, worse, of a misunderstanding of sampling theory (nothing that a careful reading of the book by professors Bolfarine and Bussab wouldn't fix).

And Stephen Few's response is devastating:

Rudder is a co-founder of the online dating service OKCupid. As such, he has access to an enormous amount of data that is generated by the choices that customers make while seeking romantic connections. Add to this the additional data that he’s collected from other social media sites, such as Facebook and Twitter, and he has a huge data set. Even though the people who use these social media sites are more demographically diverse than WEIRD college students, they don’t represent society as a whole. Derek Ruths of McGill University and Jürgen Pfeffer of Carnegie Mellon University recently expressed this concern in an article titled “Social Media for Large Studies of Behavior,” published in the November 28, 2014 issue of Science. Also, the conditions under which the data was collected exercise a great deal of influence, but Rudder has “stripped away” most of this context.

Lesson #1: Demographics are not a sign of diversity in data analysis.

After that passage comes a remark from Stephen Few that very subtly exposes the rhetorical arsenal marketing departments use to convince intelligent people to invest in something they don't understand, namely the “poetry of understanding”; and an even more serious problem: believing that the online data in which we exist as profiles says exactly who we are.

Contrary to his disclaimers about Big Data hype, Rudder expresses some hype of his own. Social media Big Data opens the door to a “poetry…of understanding. We are at the cusp of momentous change in the study of human communication.” He believes that the words people write on these sites provide the best source of information to date about the state and nature of human communication. I believe, however, that this data source reveals less than Rudder’s optimistic assessment. I suspect that it mostly reveals what people tend to say and how they tend to communicate on these particular social media sites, which support specific purposes and tend to be influenced by technological limitations—some imposed (e.g., Twitter’s 140 character limit) and others a by-product of the input device (e.g., the tiny keyboard of a smartphone). We can certainly study the effects that these technological limitations have on language, or the way in which anonymity invites offensive behavior, but are we really on the “cusp of momentous change in the study of human communication”? To derive useful insights from social media data, we’ll need to apply the rigor of science to our analyses just as we do with other data sources.

Lesson #2: Understanding sampling bias will always reduce the chance of bad generalizations.

Lesson #3: Specific contexts are not generalizable (i.e., induction is not the same thing as deduction).

And finally the author delivers a gem that deserves a place in the pantheon of bullshit (like this one from Bastter.com, the biggest fighter of media and marketing bullshit in Brazil). Readers more sensitive to the absence of logical-scientific reasoning should brace themselves for what comes next. Brace yourselves, because this claim is a strong one:

“With Big Data we no longer need to adhere to the basic principles of science.”


The response, another demolition:

Sourcing data from the wild rather than from controlled experiments in the lab has always been an important avenue of scientific study. These studies are observational rather than experimental. When we do this, we must carefully consider the many conditions that might affect the behavior that we’re observing. From these observations, we carefully form hypotheses, and then we test them, if possible, in controlled experiments. Large social media data sets don’t alleviate the need for this careful approach. I’m not saying that large stores of social media data are useless. Rather, I’m saying that if we’re going to call what we do with it data science, let’s make sure that we adhere to the principles and practices of science. How many of the people who call themselves “data scientists” on resumes today have actually been trained in science? I don’t know the answer, but I suspect that it’s relatively few, just as most of those who call themselves “data analysts” of some type or other have not been trained in data analysis. No matter how large the data source, scientific study requires rigor. This need is not diminished in the least by data volume. Social media data may be able to reveal aspects of human behavior that would be difficult to observe in any other way. We should take advantage of this. However, we mustn’t treat social media data as magical, nor analyze it with less rigor than other sources of data. It is just data. It is abundantly available, but it’s still just data.

By the same logic, we wouldn't need randomized trials to find out whether a given drug or dietary paradigm is wrong; we could forget about sample-size determination, hypotheses, basic concepts of random sampling, checking the specifics of a population before generalizing conclusions, or even accounting for random error and statistical fluctuations.

Just grab social media data and generalize.

Lesson #4: Volume means nothing without a representative sample.

Lesson #5: Regardless of the source, data is still data. And it must always be treated with rigor.
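
A minimal sketch of Lessons #4 and #5 in Python (assuming numpy; the population is simulated): a huge but biased sample misses the true population mean by a wide margin, while a small random sample lands close to it.

# Sketch only: sample size does not fix sampling bias. A small random sample
# beats a huge convenience sample drawn from an unrepresentative slice.
import numpy as np

rng = np.random.default_rng(123)
population = rng.normal(loc=50, scale=10, size=10_000_000)

small_random = rng.choice(population, size=1_000, replace=False)

# "Big Data" convenience sample: only the half of the population above the
# median, as if we sampled only the users of one social network.
biased_pool = population[population > np.median(population)]
huge_biased = rng.choice(biased_pool, size=1_000_000, replace=False)

print("true population mean:", round(population.mean(), 2))
print("small random sample :", round(small_random.mean(), 2))   # close
print("huge biased sample  :", round(huge_biased.mean(), 2))    # way off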

There will be a few more posts on this sampling question, but the most important thing is the lessons we can draw from those I consider innocents in the service of misinformation.


When does noise become signal?

One of the main traits of the bad journalism being done today (which ends up hurting the good professionals) is seeing signal where there is essentially only noise.

This Yahoo “news” piece is a clear example.

In it, the authors took a loose, unfortunate remark by Stephen Hawking and added the dose of sensationalism needed to earn clicks for their advertisers.

The remark was:

“The primitive forms of artificial intelligence we already have have proved very useful,” admits Stephen Hawking, who, stricken by a neuromuscular disease, speaks through a computer.

“But I think the development of full artificial intelligence could spell the end of the human race,” he said in a recent interview with the BBC.

The field of Artificial Intelligence and its branches, such as Computational Intelligence and heuristics and metaheuristics, have been doing excellent work in the evolution of the world we live in today.

There is no need to see AI as something far off. It is already in places like:

Recommender systems.

Education.

Meteorology.

Jobs that are dangerous for human beings.

Pollutant emissions.

The aerospace industry.

Banking.

Security and border control.

Medicine (surgical assistants).

These are just a few simple examples of what AI is already doing today.

Saying that the development of AI would lead to the destruction of the human race is like blaming chemistry for the destructive power of atomic bombs, which are a far more serious threat, but one without the same dramatic journalistic appeal.

To read: Applicability of Artificial Intelligence in Different Fields of Life
