Domínguez Catena, Iris

Last Name

Domínguez Catena

First Name

Iris

Department

Estadística, Informática y Matemáticas

Institute

ISC. Institute of Smart Cities


Search Results

Now showing 1 - 9 of 9
  • Publication (Open Access)
    A study of OWA operators learned in convolutional neural networks
    (MDPI, 2021) Domínguez Catena, Iris; Paternain Dallo, Daniel; Galar Idoate, Mikel; Institute of Smart Cities - ISC; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa
    Ordered Weighted Averaging (OWA) operators have been integrated in Convolutional Neural Networks (CNNs) for image classification through the OWA layer. This layer lets the CNN integrate global information about the image in the early stages, where most CNN architectures only allow for the exploitation of local information. As a side effect of this integration, the OWA layer becomes a practical method for the determination of OWA operator weights, which is usually a difficult task that complicates the integration of these operators in other fields. In this paper, we explore the weights learned for the OWA operators inside the OWA layer, characterizing them through their basic properties of orness and dispersion. We also compare them to some families of OWA operators, namely the Binomial OWA operator, the Stancu OWA operator and the exponential RIM OWA operator, finding examples that are currently impossible to generalize through these parameterizations.
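As a rough illustration of the two properties used to characterize the learned weights, the orness and dispersion of an OWA weight vector can be computed from their standard definitions as follows (a minimal sketch; the function names are ours, not the paper's):

```python
import numpy as np

def orness(w):
    """Orness of an OWA weight vector w (weights applied to values sorted
    in descending order): 1 for the max operator, 0 for the min,
    and 0.5 for the arithmetic mean."""
    n = len(w)
    return sum((n - i - 1) * w[i] for i in range(n)) / (n - 1)

def dispersion(w):
    """Shannon entropy of the weights; maximal (log n) when all weights
    are equal, 0 when a single weight carries all the mass."""
    w = np.asarray(w, dtype=float)
    nz = w[w > 0]  # 0 * log(0) is taken as 0 by convention
    return float(-(nz * np.log(nz)).sum())

# The arithmetic mean has orness 0.5 and maximal dispersion ln(4)
w_mean = [0.25, 0.25, 0.25, 0.25]
print(orness(w_mean))      # 0.5
print(dispersion(w_mean))  # ≈ 1.386
```

With these two numbers, a learned weight vector can be placed relative to parametric families such as the Binomial or Stancu OWA operators.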
  • Publication (Open Access)
    Metrics for dataset demographic bias: a case study on facial expression recognition
    (IEEE, 2024) Domínguez Catena, Iris; Paternain Dallo, Daniel; Galar Idoate, Mikel; Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika; Institute of Smart Cities - ISC; Universidad Pública de Navarra - Nafarroako Unibertsitate Publikoa
    Demographic biases in source datasets have been shown to be one of the causes of unfairness and discrimination in the predictions of Machine Learning models. One of the most prominent types of demographic bias is the statistical imbalance in the representation of demographic groups in the datasets. In this paper, we study the measurement of these biases by reviewing the existing metrics, including those that can be borrowed from other disciplines. We develop a taxonomy for the classification of these metrics, providing a practical guide for the selection of appropriate ones. To illustrate the utility of our framework, and to further understand the practical characteristics of the metrics, we conduct a case study of 20 datasets used in Facial Emotion Recognition (FER), analyzing the biases present in them. Our experimental results show that many metrics are redundant and that a reduced subset of metrics may be sufficient to measure the amount of demographic bias. The paper provides valuable insights for researchers in AI and related fields to mitigate dataset bias and improve the fairness and accuracy of AI models.
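As an illustration of the kind of representational metric surveyed in this line of work, the following sketch computes Pielou's evenness of demographic group proportions (a generic imbalance measure borrowed from ecology; this is not necessarily one of the metrics the paper recommends):

```python
import math
from collections import Counter

def evenness(group_labels):
    """Pielou's evenness: Shannon entropy of the group proportions divided
    by its maximum log(k). 1 means perfectly balanced representation;
    values near 0 indicate strong representational imbalance."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    h = -sum(p * math.log(p) for p in probs)
    k = len(counts)
    return h / math.log(k) if k > 1 else 1.0

balanced = ["f"] * 50 + ["m"] * 50
skewed = ["f"] * 90 + ["m"] * 10
print(evenness(balanced))  # 1.0
print(evenness(skewed))    # ≈ 0.469
```

Many representational metrics of this kind are monotone transformations of each other, which is one way such a review can find them redundant in practice.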
  • Publication (Open Access)
    Additional feature layers from ordered aggregations for deep neural networks
    (IEEE, 2020) Domínguez Catena, Iris; Paternain Dallo, Daniel; Galar Idoate, Mikel; Institute of Smart Cities - ISC; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa
    In recent years we have seen huge advances in the area of Machine Learning, especially with the use of Deep Neural Networks. One of the most relevant examples is image classification, where convolutional neural networks have proven to be a vital tool, hard to replace with any other technique. Although aggregation functions, such as OWA operators, have previously been used on top of neural networks, usually to aggregate the outputs of different networks or systems (ensembles), in this paper we propose and explore a new way of using OWA aggregations in deep learning. We implement OWA aggregations as a new layer inside a convolutional neural network. These layers are used to learn additional order-based information from the feature maps of a certain layer, and the newly generated information is then used as a complementary input for the following layers. We carry out several tests introducing the new layer in a VGG13-based reference network and show that this layer introduces new knowledge into the network without substantially increasing training times.
  • Publication (Open Access)
    Diseño y captura de una base de datos para el reconocimiento de emociones minimizando sesgos
    (CAEPIA, 2024) Jurío Munárriz, Aránzazu; Pascual Casas, Rubén; Domínguez Catena, Iris; Paternain Dallo, Daniel; Galar Idoate, Mikel; Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika; Institute of Smart Cities - ISC; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa; Gobierno de Navarra / Nafarroako Gobernua
    Facial expression recognition (FER) is an important research field for human-machine interaction. However, the datasets used to train FER models often contain demographic biases that can lead to discrimination in the final model. In this work, we present the design and capture process carried out to create a new FER database, in which we try to minimize biases from the design stage itself. The database has been created using different capture methods. To verify the bias reduction achieved, we analyze several representational and stereotypical bias metrics on the generated database and compare it against other standard databases in the FER literature.
  • Publication (Open Access)
    Less can be more: representational vs. stereotypical gender bias in facial expression recognition
    (Springer, 2024-10-14) Domínguez Catena, Iris; Paternain Dallo, Daniel; Jurío Munárriz, Aránzazu; Galar Idoate, Mikel; Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa
    Machine learning models can inherit biases from their training data, leading to discriminatory or inaccurate predictions. This is particularly concerning with the increasing use of large, unsupervised datasets for training foundational models. Traditionally, demographic biases within these datasets have not been well understood, limiting our ability to understand how they propagate to the models themselves. To address this issue, this paper investigates the propagation of demographic biases from datasets into machine learning models. We focus on the gender demographic component, analyzing two types of bias: representational and stereotypical. For our analysis, we consider the domain of facial expression recognition (FER), a field known to exhibit biases in most popular datasets. We use AffectNet, one of the largest FER datasets, as our baseline for carefully designing and generating subsets that incorporate varying strengths of both representational and stereotypical bias. Subsequently, we train several models on these biased subsets, evaluating their performance on a common test set to assess the propagation of bias into the models' predictions. Our results show that representational bias has a weaker impact than expected. Models exhibit a good generalization ability even in the absence of one gender in the training dataset. Conversely, stereotypical bias has a significantly stronger impact, primarily concentrated on the biased class, although it can also influence predictions for unbiased classes. These results highlight the need for a bias analysis that differentiates between types of bias, which is crucial for the development of effective bias mitigation strategies.
  • Publication (Open Access)
    DSAP: analyzing bias through demographic comparison of datasets
    (Elsevier, 2024-10-29) Domínguez Catena, Iris; Paternain Dallo, Daniel; Galar Idoate, Mikel; Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa; Gobierno de Navarra / Nafarroako Gobernua
    In the last few years, Artificial Intelligence (AI) systems have become increasingly widespread. Unfortunately, these systems can share many biases with human decision-making, including demographic biases. Often, these biases can be traced back to the data used for training, where large uncurated datasets have become the norm. Despite our awareness of these biases, we still lack general tools to detect, quantify, and compare them across different datasets. In this work, we propose DSAP (Demographic Similarity from Auxiliary Profiles), a two-step methodology for comparing the demographic composition of datasets. First, DSAP uses existing demographic estimation models to extract a dataset's demographic profile. Second, it applies a similarity metric to compare the demographic profiles of different datasets. While these individual components are well-known, their joint use for demographic dataset comparison is novel and has not been previously addressed in the literature. This approach allows three key applications: the identification of demographic blind spots and bias issues across datasets, the measurement of demographic bias, and the assessment of demographic shifts over time. DSAP can be used on datasets with or without explicit demographic information, provided that demographic information can be derived from the samples using auxiliary models, such as those for image or voice datasets. To show the usefulness of the proposed methodology, we consider the Facial Expression Recognition task, where demographic bias has previously been found. The three applications are studied over a set of twenty datasets with varying properties. The code is available at https://github.com/irisdominguez/DSAP.
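The two DSAP steps can be sketched in simplified form as follows (a toy illustration: the auxiliary demographic estimator is abstracted as a callable, and 1 minus total variation distance stands in for whichever similarity metric the method actually uses):

```python
from collections import Counter

def demographic_profile(samples, estimate_group):
    """Step 1: build a dataset's demographic profile as a distribution
    over groups, using an auxiliary estimator passed in as a function
    (in the real setting, e.g. a face-attribute model)."""
    counts = Counter(estimate_group(s) for s in samples)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def profile_similarity(p, q):
    """Step 2: compare two profiles. Here 1 minus the total variation
    distance: 1 means identical demographic composition, 0 fully
    disjoint compositions."""
    groups = set(p) | set(q)
    tv = 0.5 * sum(abs(p.get(g, 0.0) - q.get(g, 0.0)) for g in groups)
    return 1.0 - tv

# Toy example with precomputed group labels standing in for the estimator
ds_a = ["young", "young", "old", "old"]
ds_b = ["young", "young", "young", "old"]
pa = demographic_profile(ds_a, lambda s: s)
pb = demographic_profile(ds_b, lambda s: s)
print(profile_similarity(pa, pb))  # 0.75
```

Comparing a dataset's profile against a balanced reference profile gives a bias measure, while comparing snapshots of the same dataset over time tracks demographic shift.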
  • Publication (Open Access)
    Learning channel-wise ordered aggregations in deep neural networks
    (Springer, 2021) Domínguez Catena, Iris; Paternain Dallo, Daniel; Galar Idoate, Mikel; Institute of Smart Cities - ISC; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa
    One of the most common techniques for approaching image classification problems is Deep Neural Networks. These systems are capable of classifying images at different levels of detail, with an accuracy that can sometimes surpass even manual classification by humans. The most common architectures for Deep Neural Networks are based on convolutional layers, which simultaneously perform a convolution on each input channel and a linear aggregation of the convolved channels. In this work, we develop a new method for augmenting the information of a layer inside a Deep Neural Network using channel-wise ordered aggregations. We develop a new layer that can be placed at different points inside a Deep Neural Network. This layer takes the feature maps of the previous layer and adds new feature maps by applying several channel-wise ordered aggregations based on learned weighting vectors. We perform several experiments introducing this layer in a VGG neural network and study its impact, obtaining better accuracy scores over a sample dataset based on ImageNet. We also study the convergence and evolution of the weighting vectors of the new layers over the learning process, which gives a better understanding of the way the system exploits the additional information to gain new knowledge.
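The channel-wise ordered aggregation described above can be sketched as follows (an illustrative NumPy forward pass with fixed weights; the actual layer learns its weighting vectors end-to-end inside the network):

```python
import numpy as np

def owa_channel_layer(feature_maps, weight_vectors):
    """At each spatial position, sort the channel values in descending
    order and apply each OWA weight vector, producing one new feature
    map per vector.
    feature_maps: (C, H, W); weight_vectors: (K, C), rows summing to 1."""
    # sort channels descending at every (h, w) position
    ordered = -np.sort(-feature_maps, axis=0)             # (C, H, W)
    new_maps = np.einsum("kc,chw->khw", weight_vectors, ordered)
    return new_maps  # (K, H, W), concatenated to the original maps downstream

x = np.random.rand(8, 4, 4)                 # 8 channels, 4x4 feature maps
w = np.array([[1] + [0] * 7,                # channel-wise maximum
              [1 / 8] * 8], dtype=float)    # channel-wise mean
extra = owa_channel_layer(x, w)
print(extra.shape)  # (2, 4, 4)
assert np.allclose(extra[0], x.max(axis=0))
assert np.allclose(extra[1], x.mean(axis=0))
```

Because the sort is channel-wise, each new map encodes order statistics (max, median, soft combinations) that a plain linear aggregation of channels cannot express.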
  • Publication (Open Access)
    Unsupervised fuzzy measure learning for classifier ensembles from coalitions performance
    (IEEE, 2020) Uriz Martín, Mikel Xabier; Paternain Dallo, Daniel; Domínguez Catena, Iris; Bustince Sola, Humberto; Galar Idoate, Mikel; Institute of Smart Cities - ISC; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa, PJUPNA13
    In Machine Learning, an ensemble refers to the combination of several classifiers with the objective of improving the performance of every one of its members. To design an ensemble, two main aspects must be considered: how to create a diverse set of classifiers and how to combine their outputs. This work focuses on the latter task. More specifically, we focus on the usage of aggregation functions based on fuzzy measures, such as the Sugeno and Choquet integrals, since they allow modeling the coalitions and interactions among the members of the ensemble. In this scenario, the challenge is how to construct a fuzzy measure that models the relations among the members of the ensemble. We focus on unsupervised methods for fuzzy measure construction, review existing alternatives and categorize them depending on their features. Furthermore, we address the weaknesses of previous alternatives by proposing a new construction method that obtains the fuzzy measure by directly evaluating the performance of each possible subset of classifiers, which can be computed efficiently. To test the usefulness of the proposed fuzzy measure, we focus on the application of ensembles to imbalanced datasets. We consider a set of 66 imbalanced datasets and develop a complete experimental study comparing the reviewed methods and our proposal.
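For reference, the discrete Choquet integral used to combine classifier outputs with respect to a fuzzy measure can be sketched as follows (a minimal illustration; the fuzzy measure here is given explicitly, whereas the paper constructs it from the performance of classifier coalitions):

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of per-classifier scores in [0, 1] with
    respect to a fuzzy measure mu, given as a dict mapping frozensets of
    classifier indices to values (mu of the empty set 0, full set 1)."""
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i])  # ascending scores
    result, prev = 0.0, 0.0
    for pos, i in enumerate(order):
        # coalition of classifiers whose score is >= scores[i]
        coalition = frozenset(order[pos:])
        result += (scores[i] - prev) * mu[coalition]
        prev = scores[i]
    return result

# Two classifiers with an additive (probability-like) fuzzy measure,
# for which the Choquet integral reduces to a weighted mean:
mu = {frozenset(): 0.0, frozenset({0}): 0.4,
      frozenset({1}): 0.6, frozenset({0, 1}): 1.0}
print(choquet_integral([0.8, 0.5], mu))  # ≈ 0.62, i.e. 0.4*0.8 + 0.6*0.5
```

Non-additive measures (where mu of a coalition differs from the sum of its parts) are what let the integral reward or penalize specific interactions among ensemble members.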
  • Publication (Open Access)
    Gender stereotyping impact in facial expression recognition
    (Springer, 2023) Domínguez Catena, Iris; Paternain Dallo, Daniel; Galar Idoate, Mikel; Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika; Institute of Smart Cities - ISC; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa
    Facial Expression Recognition (FER) uses images of faces to identify the emotional state of users, allowing for a closer interaction between humans and autonomous systems. Unfortunately, as the images naturally integrate some demographic information, such as the apparent age, gender, and race of the subject, these systems are prone to demographic bias issues. In recent years, machine learning-based models have become the most popular approach to FER. These models require training on large datasets of facial expression images, and their generalization capabilities are strongly related to the characteristics of the dataset. In publicly available FER datasets, apparent gender representation is usually mostly balanced, but the representation within individual labels is not, embedding social stereotypes into the datasets and generating a potential for harm. Although this type of bias has been overlooked so far, it is important to understand the impact it may have in the context of FER. To do so, we use a popular FER dataset, FER+, to generate derivative datasets with different amounts of stereotypical bias by altering the gender proportions of certain labels. We then measure the discrepancy between the performance of the models trained on these datasets for the apparent gender groups. We observe a discrepancy of up to 29% in the recognition of certain emotions between genders under the worst bias conditions. Our results also suggest a safety range of stereotypical bias in a dataset within which no stereotypical bias appears to be produced in the resulting model. Our findings support the need for a thorough bias analysis of public datasets in problems like FER, where a global balance of demographic representation can still hide other types of bias that harm certain demographic groups.