Algorithm over Regulations (?)

This scene is the best thing I can relate to this particular topic.

“But, the bells have already been rung and they’ve heard it. Out in the dark. Among the stars. Ding dong, the God is dead. The bells cannot be unrung! He’s hungry. He’s found us. And He’s coming!

Ding, ding, ding, ding, ding…”

(Hint, fellas: this is a great time to not be evil and to check your models for any kind of discrimination against your current or potential customers.)
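To make that hint a bit more concrete, here is a minimal sketch of the kind of check one could run over a model's decisions grouped by a protected attribute; the records, group labels, and the four-fifths threshold are all illustrative assumptions, not anything prescribed by the GDPR or by the paper.

```python
# Minimal sketch of a disparate-impact check over a model's decisions.
# The records, group labels, and the 0.8 ("four-fifths") threshold are
# illustrative assumptions; nothing here comes from the GDPR or the paper.
from collections import defaultdict

decisions = [
    # (protected_group, model_approved)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

approved = defaultdict(int)
total = defaultdict(int)
for group, ok in decisions:
    total[group] += 1
    approved[group] += int(ok)

# Approval rate per protected group.
rates = {g: approved[g] / total[g] for g in total}
print("approval rates:", rates)

# Disparate-impact ratio: lowest approval rate divided by the highest.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common heuristic threshold, used here only as an assumption
    print("warning: approval rates diverge enough to deserve a closer look")
```

In practice one would reach for the identification and correction tools cited in the paper's conclusion, but even this amount of bookkeeping makes a gap in approval rates hard to miss.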

European Union regulations on algorithmic decision-making and a “right to explanation” – by Bryce Goodman and Seth Flaxman

Abstract: We summarize the potential impact that the European Union’s new General Data Protection Regulation will have on the routine use of machine learning algorithms. Slated to take effect as law across the EU in 2018, it will restrict automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which “significantly affect” users. The law will also effectively create a “right to explanation,” whereby a user can ask for an explanation of an algorithmic decision that was made about them. We argue that while this law will pose large challenges for industry, it highlights opportunities for computer scientists to take the lead in designing algorithms and evaluation frameworks which avoid discrimination and enable explanation.

Conclusion: While the GDPR presents a number of problems for current applications in machine learning they are, we believe, good problems to have. The challenges described in this paper emphasize the importance of work that ensures that algorithms are not merely efficient, but transparent and fair. Research is underway in pursuit of rendering algorithms more amenable to ex post and ex ante inspection [11, 31, 20]. Furthermore, a number of recent studies have attempted to tackle the issue of discrimination within algorithms by introducing tools to both identify [5, 29] and rectify [9, 16, 32, 6, 12, 14] cases of unwanted bias. It remains to be seen whether these techniques are adopted in practice. One silver lining of this research is to show that, for certain types of algorithmic profiling, it is possible to both identify and implement interventions to correct for discrimination. This is in contrast to cases where discrimination arises from human judgment. The role of extraneous and ethically inappropriate factors in human decision making is well documented (e.g., [30, 10, 1]), and discriminatory decision making is pervasive in many of the sectors where algorithmic profiling might be introduced (e.g. [19, 7]). We believe that, properly applied, algorithms can not only make more accurate predictions, but offer increased transparency and fairness over their human counterparts (cf. [23]). Above all else, the GDPR is a vital acknowledgement that, when algorithms are deployed in society, few if any decisions are purely “technical”. Rather, the ethical design of algorithms requires coordination between technical and philosophical resources of the highest caliber. A start has been made, but there is far to go. And, with less than two years until the GDPR takes effect, the clock is ticking.

European Union regulations on algorithmic decision-making and a “right to explanation”


Algorithm over Regulations (?)

The real danger to privacy is not data mining by big corporations or government surveillance, but both

Among the posts that make it into the mainstream media, this is probably the best-grounded opinion, and the one with the broadest view of the privacy and data-mining question: it connects what big companies know about us with government surveillance, and shows how these organizations are intrinsically linked and why this is a threat to privacy as a whole.

The article opens with a statement that may be trivial for anyone who does data mining, but that is downright frightening for ordinary people in terms of how thoroughly corporations know our personal data:

It is said that a Visa executive – as in Visa, the credit card system – can predict your divorce one year ahead of yourself, based on your buying habits. There’s a recent telling anecdote where Target, the chain of stores, knew that a teenage woman was pregnant before her parents knew. If our purchase habits give away our life and privacy to this degree – imagine what Google or Facebook would be able to predict, if they wanted to?

On the government side, as posted here before about TIA (Total Information Awareness): after a partnership between Google and the CIA (a typical public-private partnership), that program was strangely shelved by the American government.

This quote captures well what governments are capable of doing with your information:

So let’s instead jump to what governments can do. Many enough countries now have blanket wiretapping laws in place that let them wiretap all their own citizens’ net traffic, all other citizens’ traffic, or both. (This would have been absolutely unthinkable just a decade ago.) Additionally, the security services generally share raw data between them – so just because you’re not tapped in your home country, that doesn’t mean your local security service doesn’t have a copy of everything you’ve ever typed or sent online; it can be tapped anywhere.

Governments are not only able to knock down your door when you behave in a way they don’t approve of. They even like doing exactly that, and see it as their job. This is something of a problem, and quite a severe one.


Given the approach the author proposes, it is worth stressing that within a few years there will be a need for regulation of how companies acquire, control, and commercialize information, as well as for greater controls on the government side. The discussion is a good one and the article offers an interesting point of view. Worth reading.

The real danger to privacy is not data mining by big corporations or government surveillance, but both

Bias and Data Torture

This post by Flavio Comim at Lies, Big Lies, and Statistics shows that human bias is a fascinating thing; it raises a very interesting question: how does the government define a middle class without considering the average cost of living?

The answer: torture the data, and inflate the average by leaving out the most important variable in the study.
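As a toy illustration of that kind of data torture, the sketch below compares how many households fall inside a "middle class" income band with and without deflating income by the local cost of living; every number in it (incomes, indices, thresholds) is made up for the example.

```python
# Toy example of "data torture": counting the middle class without
# adjusting for cost of living. All incomes, indices, and thresholds
# below are made-up numbers for illustration only.

households = [
    # (monthly_income, local_cost_of_living_index, where 1.0 = national average)
    (2500, 1.4), (2800, 1.5), (3200, 1.6),   # expensive metro areas
    (2000, 0.9), (1800, 0.8), (2200, 1.0),   # average-cost areas
]

LOWER, UPPER = 2000, 4000  # illustrative "middle class" income band

def middle_class_count(incomes):
    return sum(LOWER <= income <= UPPER for income in incomes)

raw = [income for income, _ in households]
adjusted = [income / index for income, index in households]  # deflate by cost of living

print("middle class, raw income:     ", middle_class_count(raw), "of", len(households))
print("middle class, adjusted income:", middle_class_count(adjusted), "of", len(households))
```

With these made-up figures, the unadjusted count comes out higher precisely because expensive-city incomes look comfortable once the cost of living is ignored.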

Bias and Data Torture