Gender stereotyping impact in facial expression recognition
Date
2023
Open access
Type
Conference contribution
Version
Accepted version
DOI
10.1007/978-3-031-23618-1_1
Abstract
Facial Expression Recognition (FER) uses images of faces to identify the emotional state of users, allowing for a closer interaction between humans and autonomous systems. Unfortunately, as the images naturally integrate some demographic information, such as apparent age, gender, and race of the subject, these systems are prone to demographic bias issues. In recent years, machine learning-based models have become the most popular approach to FER. These models require training on large datasets of facial expression images, and their generalization capabilities are strongly related to the characteristics of the dataset. In publicly available FER datasets, apparent gender representation is usually mostly balanced overall, but the representation within individual labels is not, embedding social stereotypes into the datasets and generating a potential for harm. Although this type of bias has been overlooked so far, it is important to understand the impact it may have in the context of FER. To do so, we use a popular FER dataset, FER+, to generate derivative datasets with different amounts of stereotypical bias by altering the gender proportions of certain labels. We then measure the discrepancy between the performance of the models trained on these datasets across the apparent gender groups. We observe a discrepancy in the recognition of certain emotions between genders of up to 29% under the worst bias conditions. Our results also suggest a safe range of stereotypical bias in a dataset within which the resulting model does not appear to exhibit stereotypical bias. Our findings support the need for a thorough bias analysis of public datasets in problems like FER, where a global balance of demographic representation can still hide other types of bias that harm certain demographic groups.
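The two steps described in the abstract — resampling a label's gender proportions to inject stereotypical bias, and measuring the per-gender recognition discrepancy — can be sketched as below. This is a minimal illustration, not the authors' code: the function names (`inject_stereotypical_bias`, `recall_gap`), the `(label, gender)` record format, and the `"F"`/`"M"` group encoding are assumptions made for the example.

```python
import random

def inject_stereotypical_bias(records, label, female_ratio, seed=0):
    """Subsample `records` so that, within `label`, the share of
    apparent-female samples matches `female_ratio`. Records of other
    labels are kept untouched; the over-represented gender group of
    `label` is scaled down to hit the target proportion.

    records: list of (label, gender) tuples, gender in {"F", "M"}.
    Returns a new list of records.
    """
    rng = random.Random(seed)
    target = [r for r in records if r[0] == label]
    rest = [r for r in records if r[0] != label]
    females = [r for r in target if r[1] == "F"]
    males = [r for r in target if r[1] == "M"]
    if female_ratio <= 0:
        n_f, n_m = 0, len(males)
    elif female_ratio >= 1:
        n_f, n_m = len(females), 0
    else:
        # largest n_f, n_m with n_f / (n_f + n_m) ~= female_ratio
        n_f = min(len(females), int(len(males) * female_ratio / (1 - female_ratio)))
        n_m = min(len(males), int(n_f * (1 - female_ratio) / female_ratio))
    return rest + rng.sample(females, n_f) + rng.sample(males, n_m)

def recall_gap(y_true, y_pred, genders, label):
    """Absolute difference in recall for `label` between the two
    apparent-gender groups (the per-emotion discrepancy metric)."""
    def recall(group):
        idx = [i for i, t in enumerate(y_true)
               if t == label and genders[i] == group]
        if not idx:
            return 0.0
        return sum(1 for i in idx if y_pred[i] == label) / len(idx)
    return abs(recall("F") - recall("M"))
```

For instance, starting from a label with 100 female and 100 male samples, `inject_stereotypical_bias(records, "happiness", 0.25)` keeps all other labels intact and leaves "happiness" with roughly one female sample for every three male ones; training on such derivatives and comparing `recall_gap` across them is the kind of experiment the abstract describes.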
Subjects
Facial expression recognition,
Gender stereotyping,
Stereotypical bias
Publisher
Springer
Published in
Koprinska, I.; Mignone, P.; Guidotti, R.; Jaroszewicz, S.; Fröning, H.; Gullo, F.; Ferreira, P. M.; Roqueiro, D.; Ceddia, G.; Nowaczyk, S.; Gama, J.; Ribeiro, R.; Gavaldà, R.; Masciari, E.; Ras, Z.; Ritacco, E.; Naretto, F.; Theissler, A.; Biecek, P.; Pashami, S. (Eds.). ECML PKDD 2022: Machine Learning and Principles and Practice of Knowledge Discovery in Databases. Cham: Springer; 2023. pp. 9-22. ISBN 978-3-031-23617-4
Department
Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa. Department of Statistics, Computer Science and Mathematics
Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa. Institute of Smart Cities - ISC
Publisher's version
Funding
This work was funded by a predoctoral fellowship of the Research Service of Universidad Pública de Navarra, the Spanish MICIN (PID2019-108392GB-I00 and PID2020-118014RB-I00 / AEI / 10.13039/501100011033), and the Government of Navarre (0011-1411-2020-000079 - Emotional Films).