Making data fair through optimal trimmed matching
Date
Authors
Director
Publisher
Project identifier
Impact
Abstract
Algorithmic fairness is one of the main concerns of today's scientific community due to the generalization of predictive algorithms to all aspects of human life. The aim of this work is to check whether there is group bias in the response variable Y with respect to sensitive information S present in the data. However, not all individuals in S are comparable, and some differences in the target Y may arise from genuine differences in the data. We propose to eliminate such cases by trimming a proportion of the input data, as a pre-processing step to any further learning mechanism, in order to obtain the two closest possible marginal distributions (with respect to S). On this population that is "similar enough" we can check for discrimination, in the sense of Demographic Parity. We solve a trimmed matching problem subject to fairness constraints; the resulting linear program can be addressed with well-known techniques. We present successful results on synthetic and real data.
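The trimmed matching step described in the abstract reduces to a linear program. The sketch below illustrates one such formulation; the Euclidean cost, the uniform weights 1/n0 and 1/n1, and the constraint that exactly a mass of 1 - alpha is matched are illustrative assumptions, not the paper's exact program. It is solved here with scipy.optimize.linprog.

import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist


def trimmed_matching(X0, X1, alpha=0.1):
    """Trimmed optimal matching between two groups, posed as a linear program.

    Only a mass of 1 - alpha is matched across the groups; the remaining
    alpha is trimmed, so individuals without a close counterpart in the
    other group are discarded before any fairness check. Illustrative
    formulation, not the authors' exact program.
    """
    n0, n1 = len(X0), len(X1)
    C = cdist(X0, X1)            # assumed matching cost: Euclidean distances
    c = C.ravel()                # objective: sum_ij C_ij * pi_ij

    # Each individual of group S = 0 can send at most mass 1/n0 ...
    A_rows = np.zeros((n0, n0 * n1))
    for i in range(n0):
        A_rows[i, i * n1:(i + 1) * n1] = 1.0
    # ... and each individual of group S = 1 can receive at most mass 1/n1.
    A_cols = np.zeros((n1, n0 * n1))
    for j in range(n1):
        A_cols[j, j::n1] = 1.0
    A_ub = np.vstack([A_rows, A_cols])
    b_ub = np.concatenate([np.full(n0, 1.0 / n0), np.full(n1, 1.0 / n1)])

    # Total matched mass is exactly 1 - alpha (the trimming level).
    A_eq = np.ones((1, n0 * n1))
    b_eq = np.array([1.0 - alpha])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.x.reshape(n0, n1), res.fun


# Toy usage: two Gaussian samples playing the role of the groups S = 0 and S = 1.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(30, 2))
X1 = rng.normal(0.5, 1.0, size=(40, 2))
plan, cost = trimmed_matching(X0, X1, alpha=0.1)
kept0 = plan.sum(axis=1) > 1e-9   # individuals of S = 0 retained after trimming
kept1 = plan.sum(axis=0) > 1e-9   # individuals of S = 1 retained after trimming

The returned coupling marks which individuals of each group keep positive mass; those with zero mass are the trimmed, non-comparable cases that would be excluded before checking Demographic Parity on the retained subpopulation.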
Description
Keywords
Department
Faculty/School
Degree
Doctorate program
Citation
Rights
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
Documents in Academica-e are protected by copyright, with all rights reserved, unless otherwise indicated.