Less can be more: representational vs. stereotypical gender bias in facial expression recognition
dc.contributor.author | Domínguez Catena, Iris | |
dc.contributor.author | Paternain Dallo, Daniel | |
dc.contributor.author | Jurío Munárriz, Aránzazu | |
dc.contributor.author | Galar Idoate, Mikel | |
dc.contributor.department | Estadística, Informática y Matemáticas | es_ES |
dc.contributor.department | Estatistika, Informatika eta Matematika | eu |
dc.contributor.funder | Universidad Publica de Navarra / Nafarroako Unibertsitate Publikoa | |
dc.date.accessioned | 2025-03-03T13:14:06Z | |
dc.date.available | 2025-03-03T13:14:06Z | |
dc.date.issued | 2024-10-14 | |
dc.date.updated | 2025-03-03T13:06:53Z | |
dc.description.abstract | Machine learning models can inherit biases from their training data, leading to discriminatory or inaccurate predictions. This is particularly concerning with the increasing use of large, unsupervised datasets for training foundational models. Traditionally, demographic biases within these datasets have not been well-understood, limiting our ability to understand how they propagate to the models themselves. To address this issue, this paper investigates the propagation of demographic biases from datasets into machine learning models. We focus on the gender demographic component, analyzing two types of bias: representational and stereotypical. For our analysis, we consider the domain of facial expression recognition (FER), a field known to exhibit biases in most popular datasets. We use AffectNet, one of the largest FER datasets, as our baseline for carefully designing and generating subsets that incorporate varying strengths of both representational and stereotypical bias. Subsequently, we train several models on these biased subsets, evaluating their performance on a common test set to assess the propagation of bias into the models' predictions. Our results show that representational bias has a weaker impact than expected. Models exhibit a good generalization ability even in the absence of one gender in the training dataset. Conversely, stereotypical bias has a significantly stronger impact, primarily concentrated on the biased class, although it can also influence predictions for unbiased classes. These results highlight the need for a bias analysis that differentiates between types of bias, which is crucial for the development of effective bias mitigation strategies. | en |
dc.description.sponsorship | This work was funded by a predoctoral fellowship and open access funding from the Research Service of the Universidad Publica de Navarra, the Spanish MICIN (PID2020-118014RB-I00 and PID2022-136627NB-I00, AEI/10.13039/501100011033 FEDER, UE), the Government of Navarre (0011-1411-2020-000079 - Emotional Films), and the support of the 2024 Leonardo Grant for Researchers and Cultural Creators from the BBVA Foundation. | |
dc.format.mimetype | application/pdf | en |
dc.identifier.citation | Dominguez-Catena, I., Paternain, D., Jurio, A., Galar, M. (2024) Less can be more: representational vs. stereotypical gender bias in facial expression recognition. Progress in Artificial Intelligence, 14(2025), 11-13. https://doi.org/10.1007/s13748-024-00345-w | |
dc.identifier.doi | 10.1007/s13748-024-00345-w | |
dc.identifier.issn | 2192-6352 | |
dc.identifier.uri | https://academica-e.unavarra.es/handle/2454/53648 | |
dc.language.iso | eng | |
dc.publisher | Springer | |
dc.relation.ispartof | Progress in Artificial Intelligence, 14(2025), 11-13 | |
dc.relation.projectID | info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/PID2020-118014RB-I00/ES/ | |
dc.relation.projectID | info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2021-2023/PID2022-136627NB-I00/ES/ | |
dc.relation.projectID | info:eu-repo/grantAgreement/Gobierno de Navarra//0011-1411-2020-000079/ | |
dc.relation.publisherversion | https://doi.org/10.1007/s13748-024-00345-w | |
dc.rights | This article is licensed under a Creative Commons Attribution 4.0 International License. | |
dc.rights.accessRights | info:eu-repo/semantics/openAccess | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | Algorithmic fairness | en |
dc.subject | Demographic bias | en |
dc.subject | Facial expression recognition | en |
dc.subject | Gender bias | en |
dc.subject | Machine learning | en |
dc.title | Less can be more: representational vs. stereotypical gender bias in facial expression recognition | en |
dc.type | info:eu-repo/semantics/article | |
dc.type.version | info:eu-repo/semantics/publishedVersion | |
dspace.entity.type | Publication | |
relation.isAuthorOfPublication | 2ba22a83-13df-4d11-8e44-f9e97dbc13d0 | |
relation.isAuthorOfPublication | ca16c024-51e4-4f8f-b457-dc5307be32d9 | |
relation.isAuthorOfPublication | 73222e0e-83db-4a05-ac1e-e5caff9fda6b | |
relation.isAuthorOfPublication | 44c7a308-9c21-49ef-aa03-b45c2c5a06fd | |
relation.isAuthorOfPublication.latestForDiscovery | 2ba22a83-13df-4d11-8e44-f9e97dbc13d0 |