Domínguez Catena, Iris
Last Name: Domínguez Catena
First Name: Iris
Department: Estadística, Informática y Matemáticas
Institute: ISC. Institute of Smart Cities
Publications
Publication (Open Access)
Less can be more: representational vs. stereotypical gender bias in facial expression recognition (Springer, 2024-10-14)
Domínguez Catena, Iris; Paternain Dallo, Daniel; Jurío Munárriz, Aránzazu; Galar Idoate, Mikel
Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa
Machine learning models can inherit biases from their training data, leading to discriminatory or inaccurate predictions. This is particularly concerning with the increasing use of large, unsupervised datasets for training foundational models. Traditionally, demographic biases within these datasets have not been well understood, limiting our ability to understand how they propagate to the models themselves. To address this issue, this paper investigates the propagation of demographic biases from datasets into machine learning models. We focus on the gender demographic component, analyzing two types of bias: representational and stereotypical. For our analysis, we consider the domain of facial expression recognition (FER), a field known to exhibit biases in most popular datasets. We use Affectnet, one of the largest FER datasets, as our baseline for carefully designing and generating subsets that incorporate varying strengths of both representational and stereotypical bias. Subsequently, we train several models on these biased subsets, evaluating their performance on a common test set to assess the propagation of bias into the models' predictions. Our results show that representational bias has a weaker impact than expected. Models exhibit a good generalization ability even in the absence of one gender in the training dataset. Conversely, stereotypical bias has a significantly stronger impact, primarily concentrated on the biased class, although it can also influence predictions for unbiased classes. These results highlight the need for a bias analysis that differentiates between types of bias, which is crucial for the development of effective bias mitigation strategies.

Publication (Open Access)
Demographic bias in machine learning: measuring transference from dataset bias to model predictions (2024)
Domínguez Catena, Iris; Galar Idoate, Mikel; Paternain Dallo, Daniel
Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika
As artificial intelligence (AI) systems increasingly influence critical decisions in society, ensuring fairness and avoiding bias have become pressing challenges. This dissertation investigates demographic bias in machine learning, with a particular focus on measuring how bias transfers from datasets to model predictions. Using Facial Expression Recognition (FER) as a primary case study, we develop novel metrics and methodologies to quantify and analyze bias at both the dataset and model levels. The thesis makes several key contributions to the field of algorithmic fairness. We propose a comprehensive taxonomy of types of dataset bias and metrics available for each type. Through extensive evaluation on FER datasets, we demonstrate the effectiveness and limitations of these metrics in capturing different aspects of demographic bias. Additionally, we introduce DSAP (Demographic Similarity from Auxiliary Profiles), a novel method for comparing datasets based on their demographic properties. DSAP enables interpretable bias measurement and analysis of demographic shifts between datasets, providing valuable insights for dataset curation and model development.
Our research includes in-depth experiments examining the propagation of representational and stereotypical biases from datasets to FER models. Our findings reveal that while representational bias tends to be mitigated during model training, stereotypical bias is more likely to persist in model predictions. Furthermore, we present a framework for measuring bias transference from datasets to models across various bias induction scenarios. This analysis uncovers complex relationships between dataset bias and resulting model bias, highlighting the need for nuanced approaches to bias mitigation. Throughout the dissertation, we emphasize the importance of considering both representational and stereotypical biases in AI systems. Our work demonstrates that these biases can manifest and propagate differently, necessitating tailored strategies for detection and mitigation. By providing robust methodologies for quantifying and analyzing demographic bias, this research contributes to the broader goal of developing fairer and more equitable AI systems. The insights and tools presented here have implications beyond FER, offering valuable approaches for addressing bias in various machine learning applications. This dissertation paves the way for future work in algorithmic fairness, emphasizing the need for continued research into bias measurement, mitigation strategies, and the development of more inclusive AI technologies.
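The distinction between representational and stereotypical bias that both works rely on can be made concrete with a small sketch. The snippet below is illustrative only and is not the metric set used in these publications: it treats representational bias as deviation from gender parity in the dataset as a whole, and stereotypical bias as the pointwise mutual information between gender and expression label. The toy samples, function names, and metric choices are assumptions made for the example.

```python
# Illustrative sketch only (not the papers' metrics): one simple way to separate
# representational bias (who appears in the data) from stereotypical bias
# (which labels co-occur with whom). Toy data and metric choices are assumptions.
from collections import Counter
from math import log2

# Hypothetical (gender, expression) annotations for a tiny FER-style dataset.
samples = [
    ("female", "happy"), ("female", "happy"), ("female", "sad"),
    ("male", "happy"), ("male", "angry"), ("male", "angry"),
]

def representational_bias(pairs):
    """Deviation of each gender's share from parity; 0 means perfectly balanced."""
    counts = Counter(g for g, _ in pairs)
    total = sum(counts.values())
    return {g: abs(c / total - 0.5) for g, c in counts.items()}

def stereotypical_bias(pairs):
    """Pointwise mutual information between gender and expression label;
    positive values mark over-represented (stereotype-like) combinations."""
    n = len(pairs)
    gender = Counter(g for g, _ in pairs)
    expression = Counter(e for _, e in pairs)
    joint = Counter(pairs)
    return {
        (g, e): log2((c / n) / ((gender[g] / n) * (expression[e] / n)))
        for (g, e), c in joint.items()
    }

print(representational_bias(samples))  # balanced genders -> {'female': 0.0, 'male': 0.0}
print(stereotypical_bias(samples))     # e.g. ('male', 'angry') gets a positive score
```

In this toy dataset the genders are equally represented, so the representational measure is zero, yet "angry" co-occurs only with "male", which the stereotypical measure flags; that is the kind of dissociation between the two bias types the abstracts describe.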
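DSAP itself is defined in the dissertation; as a loose illustration of the general idea of comparing datasets through their demographic profiles, the sketch below computes per-group proportions for two hypothetical datasets and scores their similarity as one minus the total variation distance. The dataset labels, helper names, and the choice of similarity measure are assumptions for the example, not the published method.

```python
# Loose illustration of comparing datasets through demographic profiles; the actual
# DSAP formulation is defined in the dissertation. Names and the similarity choice
# are assumptions for this example.
from collections import Counter

def demographic_profile(group_labels):
    """Proportion of samples belonging to each demographic group."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def profile_similarity(profile_a, profile_b):
    """1 minus total variation distance: 1.0 = identical demographic makeup."""
    groups = set(profile_a) | set(profile_b)
    tvd = 0.5 * sum(abs(profile_a.get(g, 0.0) - profile_b.get(g, 0.0)) for g in groups)
    return 1.0 - tvd

# Two hypothetical datasets described only by their gender annotations.
dataset_a = ["female"] * 70 + ["male"] * 30
dataset_b = ["female"] * 50 + ["male"] * 50

print(profile_similarity(demographic_profile(dataset_a),
                         demographic_profile(dataset_b)))  # -> 0.8
```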