Big data and machine learning algorithms are the buzzwords of the decade in the digital world. The promise of these technologies is to turn collected data into value through automated decision-making tools, a prospect that raises as much hope as apprehension.
Artificial intelligence is becoming more and more present in our daily lives, and naturally triggers debate on its proper development and use. Its significant impact on the development of humanity in the near future is less debatable. It is common to find publications on its applications, which now touch an ever wider range of fields.
For example, medical professionals can rely on AI to detect drug side effects, supply-chain operators to optimize their delivery routes, and social scientists to study human interactions.
One of the most widespread and industrialized fields of application over the last ten years is decision support. In practice, a large majority of domains are concerned: recommendations for online sales, assessment of the risk of contract termination, fraud, system failure and recurrence, or even medical diagnostic assistance.
There is one major caveat: when data are used for predictive purposes to assist decision making, in some applications they may affect the fate of entire classes of people in a systematically unfavorable way.
In other words, sorting and selecting the best or most profitable candidates means generating a model with winners and losers. If data scientists are not made aware of this, the process can lead to disproportionately negative outcomes concentrated among historically disadvantaged groups.
In such cases, careless data mining leads us to inherit the biases of previous decision-makers, or simply to reflect widespread prejudices that persist in society, thereby reproducing and even accentuating existing patterns of social discrimination.
To summarize, an algorithmic decision based on biased data is, at best, no less biased than a human decision.
In response, the literature proposes approaches not only to identify and assess the risk of discrimination, but also to mitigate or even correct these biases.
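As a brief illustration of how a discrimination risk can be quantified, one widely used metric is the disparate impact ratio: the rate of favorable outcomes for an unprivileged group divided by that of the privileged group. The sketch below is a minimal, self-contained version; the function name, toy data, and the 0.8 threshold (the "four-fifths rule" from US employment-discrimination practice) are illustrative assumptions, and dedicated toolkits offer richer implementations.

```python
# Minimal sketch: disparate impact ratio as a discrimination-risk metric.
# All names and data here are illustrative, not tied to any specific library.

def disparate_impact(outcomes, groups, favorable=1, privileged="A"):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    A value near 1.0 suggests parity; a value below 0.8 is often
    flagged for review (the 'four-fifths rule')."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate = lambda xs: sum(1 for o in xs if o == favorable) / len(xs)
    return rate(unpriv) / rate(priv)

# Toy data: group "A" (privileged) vs group "B" (unprivileged).
outcomes = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

print(round(disparate_impact(outcomes, groups), 2))  # → 0.44, well below 0.8
```

A ratio this far below 0.8 would signal that the model's favorable decisions are concentrated in the privileged group, which is exactly the kind of pattern bias-mitigation techniques then try to correct.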
In this light, it is not utopian to imagine an AI that combines both ethics and GDPR compliance.
Khalil Loukili, Data Scientist Inferensia