Publication: Preserving the fairness guarantees of classifiers in changing environments: a survey
Date
Authors
Director
Publisher
Project identifier
Abstract
The impact of automated decision-making systems on human lives is growing, emphasizing the need for these systems to be not only accurate but also fair. The field of algorithmic fairness has expanded significantly in the past decade, with most approaches assuming that training and testing data are drawn independently and identically from the same distribution. In practice, however, differences between the training and deployment environments exist, compromising both the performance and fairness of decision-making algorithms in real-world scenarios. A new area of research has emerged to address how to maintain fairness guarantees in classification tasks when the data generation processes differ between the source (training) and target (testing) domains. The objective of this survey is to offer a comprehensive examination of fair classification under distribution shift by presenting a taxonomy of current approaches. The taxonomy is organized according to the information available from the target domain, distinguishing between adaptive methods, which adapt to the target environment based on the available information, and robust methods, which make minimal assumptions about the target environment. Additionally, this study emphasizes alternative benchmarking methods, investigates the interconnections with related research fields, and identifies potential avenues for future research.
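To make the failure mode described above concrete, the following minimal sketch (illustrative only, and not drawn from the publication itself; the toy data generator, the shift parameter, and the choice of demographic parity as the fairness metric are all assumptions) trains a classifier on a source domain and shows its fairness gap widening under a simulated covariate shift in the target domain.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    # Toy data: feature x, binary sensitive attribute a, label y.
    # In the source domain (shift=0) x is independent of a; a positive
    # shift moves group a=1's features, simulating covariate shift.
    a = rng.integers(0, 2, size=n)
    x = rng.normal(size=n) + shift * a
    y = (x + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return x.reshape(-1, 1), a, y

def dp_gap(model, X, a):
    # Demographic-parity gap: |P(yhat=1 | a=1) - P(yhat=1 | a=0)|.
    yhat = model.predict(X)
    return abs(yhat[a == 1].mean() - yhat[a == 0].mean())

# Train on the source (training) domain.
X_src, a_src, y_src = sample(5000)
clf = LogisticRegression().fit(X_src, y_src)

# The gap is near zero on source-like test data ...
X_test, a_test, _ = sample(5000)
print("source DP gap:", dp_gap(clf, X_test, a_test))

# ... but widens on a shifted target (deployment) domain.
X_tgt, a_tgt, _ = sample(5000, shift=1.0)
print("target DP gap:", dp_gap(clf, X_tgt, a_tgt))

Adaptive methods in the survey's taxonomy would use information from the target domain to correct such a gap, while robust methods would aim to bound it in advance without that information.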
Description
Keywords
Department
Faculty/School
Degree
Doctorate program
Citation
Rights
© 2023 Copyright held by the owner/author(s).
Documents in Academica-e are protected by copyright, with all rights reserved, unless otherwise indicated.