MCF (Maître de Conférences) Applications

Hybrid approaches for assessing and improving algorithmic fairness in classification

by Guilherme Alves da Silva

S3 351 (Sciences 3)

Description

Algorithmic decisions are made on a daily basis and are based on Machine Learning (ML) processes that may be complex and biased.

This raises several concerns given the critical impact that biased decisions may have on individuals or on society as a whole.

Not only do unfair outcomes affect human rights, but they also undermine public trust in ML and AI.

In this talk, we will address the fairness-accuracy trade-off, since we are interested in reducing unintended biases without compromising classification performance.

We thus propose ensemble-based methods to find a good compromise between the fairness and the classification performance of ML models, in particular binary classifiers.
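As a rough illustration of the general idea (not the specific method presented in the talk), an ensemble can combine members trained with and without a sensitive attribute, trading a little accuracy for less reliance on that attribute. The data, feature roles, and combination rule below are purely hypothetical.

```python
# Hypothetical sketch: an ensemble of two binary classifiers, one of which
# never sees the sensitive feature. Averaging their predicted probabilities
# illustrates one way to balance accuracy against reliance on that feature.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data; column 0 plays the role of a sensitive feature.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
sensitive_col = 0
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Member 1: full feature set (accuracy-oriented).
full_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Member 2: sensitive feature removed (fairness-oriented).
mask = np.ones(X.shape[1], dtype=bool)
mask[sensitive_col] = False
masked_model = LogisticRegression(max_iter=1000).fit(X_train[:, mask], y_train)

# Combine members by averaging predicted probabilities.
proba = 0.5 * (full_model.predict_proba(X_test)[:, 1]
               + masked_model.predict_proba(X_test[:, mask])[:, 1])
y_pred = (proba >= 0.5).astype(int)
print("ensemble accuracy:", (y_pred == y_test).mean())
```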

To illustrate, we will present a general workflow that relies on explainers to tackle process fairness, a notion that essentially measures a model's reliance on sensitive or discriminatory features.
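To make the notion of process fairness concrete, here is a minimal sketch that uses permutation importance as a stand-in for an explainer (the talk's workflow may use different explainers): the lower the estimated importance of the sensitive feature, the less the model relies on it. All names and data below are assumptions for illustration only.

```python
# Hypothetical sketch: estimate a trained model's reliance on a sensitive
# feature via permutation importance, used here as a simple explainer proxy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data; column 0 again stands for a sensitive feature.
X, y = make_classification(n_samples=2000, n_features=6, random_state=1)
sensitive_col = 0
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

model = RandomForestClassifier(n_estimators=100, random_state=1)
model.fit(X_train, y_train)

# Importance of each feature for predictions on held-out data; the sensitive
# feature's mean importance serves as a crude process-fairness indicator.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=1)
reliance = result.importances_mean[sensitive_col]
print(f"estimated reliance on sensitive feature: {reliance:.4f}")
```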

We will present different applications and empirical settings that show improvements not only with respect to process fairness but also with respect to other group fairness notions.