Less can be more: representational vs. stereotypical gender bias in facial expression recognition

dc.contributor.author: Domínguez Catena, Iris
dc.contributor.author: Paternain Dallo, Daniel
dc.contributor.author: Jurío Munárriz, Aránzazu
dc.contributor.author: Galar Idoate, Mikel
dc.contributor.department: Estadística, Informática y Matemáticas (es_ES)
dc.contributor.department: Estatistika, Informatika eta Matematika (eu)
dc.contributor.funder: Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa
dc.date.accessioned: 2025-03-03T13:14:06Z
dc.date.available: 2025-03-03T13:14:06Z
dc.date.issued: 2024-10-14
dc.date.updated: 2025-03-03T13:06:53Z
dc.description.abstract: Machine learning models can inherit biases from their training data, leading to discriminatory or inaccurate predictions. This is particularly concerning with the increasing use of large, unsupervised datasets for training foundational models. Traditionally, demographic biases within these datasets have not been well understood, limiting our ability to understand how they propagate to the models themselves. To address this issue, this paper investigates the propagation of demographic biases from datasets into machine learning models. We focus on the gender demographic component, analyzing two types of bias: representational and stereotypical. For our analysis, we consider the domain of facial expression recognition (FER), a field known to exhibit biases in most popular datasets. We use AffectNet, one of the largest FER datasets, as our baseline for carefully designing and generating subsets that incorporate varying strengths of both representational and stereotypical bias. Subsequently, we train several models on these biased subsets, evaluating their performance on a common test set to assess the propagation of bias into the models' predictions. Our results show that representational bias has a weaker impact than expected. Models exhibit a good generalization ability even in the absence of one gender in the training dataset. Conversely, stereotypical bias has a significantly stronger impact, primarily concentrated on the biased class, although it can also influence predictions for unbiased classes. These results highlight the need for a bias analysis that differentiates between types of bias, which is crucial for the development of effective bias mitigation strategies. (en)
dc.description.sponsorship: This work was funded by a predoctoral fellowship and open access funding from the Research Service of the Universidad Pública de Navarra, the Spanish MICIN (PID2020-118014RB-I00 and PID2022-136627NB-I00, AEI/10.13039/501100011033 FEDER, UE), the Government of Navarre (0011-1411-2020-000079 - Emotional Films), and the support of the 2024 Leonardo Grant for Researchers and Cultural Creators from the BBVA Foundation.
dc.format.mimetype: application/pdf (en)
dc.identifier.citation: Dominguez-Catena, I., Paternain, D., Jurio, A., Galar, M. (2024). Less can be more: representational vs. stereotypical gender bias in facial expression recognition. Progress in Artificial Intelligence, 14(2025), 11-13. https://doi.org/10.1007/s13748-024-00345-w
dc.identifier.doi: 10.1007/s13748-024-00345-w
dc.identifier.issn: 2192-6352
dc.identifier.uri: https://academica-e.unavarra.es/handle/2454/53648
dc.language.iso: eng
dc.publisher: Springer
dc.relation.ispartof: Progress in Artificial Intelligence, 14(2025), 11-13
dc.relation.projectID: info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/PID2020-118014RB-I00/ES/
dc.relation.projectID: info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2021-2023/PID2022-136627NB-I00/ES/
dc.relation.projectID: info:eu-repo/grantAgreement/Gobierno de Navarra//0011-1411-2020-000079/
dc.relation.publisherversion: https://doi.org/10.1007/s13748-024-00345-w
dc.rights: This article is licensed under a Creative Commons Attribution 4.0 International License.
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: Algorithmic fairness (en)
dc.subject: Demographic bias (en)
dc.subject: Facial expression recognition (en)
dc.subject: Gender bias (en)
dc.subject: Machine learning (en)
dc.title: Less can be more: representational vs. stereotypical gender bias in facial expression recognition (en)
dc.type: info:eu-repo/semantics/article
dc.type.version: info:eu-repo/semantics/publishedVersion
dspace.entity.type: Publication
relation.isAuthorOfPublication: 2ba22a83-13df-4d11-8e44-f9e97dbc13d0
relation.isAuthorOfPublication: ca16c024-51e4-4f8f-b457-dc5307be32d9
relation.isAuthorOfPublication: 73222e0e-83db-4a05-ac1e-e5caff9fda6b
relation.isAuthorOfPublication: 44c7a308-9c21-49ef-aa03-b45c2c5a06fd
relation.isAuthorOfPublication.latestForDiscovery: 2ba22a83-13df-4d11-8e44-f9e97dbc13d0
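The abstract distinguishes two ways gender bias can be injected into a training subset: representational bias (one gender is under-represented across all classes) and stereotypical bias (the imbalance is concentrated in a single expression class). A minimal, hypothetical sketch of that distinction on a toy dataset of (label, gender) pairs — this is not the paper's code, and the function and parameter names are illustrative assumptions:

```python
import random

def biased_subset(dataset, rep_bias=0.0, stereo_bias=0.0, biased_class="happy"):
    """Drop samples from a toy FER-style dataset to inject gender bias.

    rep_bias:    probability of dropping a female sample in ANY class
                 (representational bias: one gender under-represented overall).
    stereo_bias: probability of dropping a male sample ONLY in `biased_class`
                 (stereotypical bias: a gender/expression association).
    """
    subset = []
    for label, gender in dataset:
        if gender == "female" and random.random() < rep_bias:
            continue  # representational: fewer women across all classes
        if gender == "male" and label == biased_class and random.random() < stereo_bias:
            continue  # stereotypical: the biased class skews toward one gender
        subset.append((label, gender))
    return subset
```

At the extremes the two settings behave very differently: `rep_bias=1.0` removes one gender from training entirely (the regime where the abstract reports surprisingly good generalization), while `stereo_bias=1.0` leaves overall gender counts nearly balanced but makes the biased class single-gender.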

Files

Original bundle
Name: Dominguez_LessCanBeMore.pdf
Size: 977.62 KB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed to upon submission
Description: