Domínguez Catena, Iris


Last Name: Domínguez Catena
First Name: Iris
Department: Estadística, Informática y Matemáticas
Institute: ISC. Institute of Smart Cities

Search Results

  • Publication (Open Access)
    Diseño y captura de una base de datos para el reconocimiento de emociones minimizando sesgos [Design and capture of a database for emotion recognition while minimizing biases]
    (CAEPIA, 2024) Jurío Munárriz, Aránzazu; Pascual Casas, Rubén; Domínguez Catena, Iris; Paternain Dallo, Daniel; Galar Idoate, Mikel; Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika; Institute of Smart Cities - ISC; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa; Gobierno de Navarra / Nafarroako Gobernua
    Facial expression recognition (FER) is an important research field for human-machine interaction. However, the datasets used to train FER models often contain demographic biases that can lead to discrimination in the final model. In this work, we present the design and capture process carried out to create a new FER database, in which we try to minimize biases from the design stage itself. The database has been built using different capture methods. To assess the bias reduction achieved, we analyze several representational and stereotypical bias metrics on the resulting database and compare it against other standard databases in the FER literature.
    (An illustrative sketch of such bias metrics is given after this list.)
  • Publication (Open Access)
    Additional feature layers from ordered aggregations for deep neural networks
    (IEEE, 2020) Domínguez Catena, Iris; Paternain Dallo, Daniel; Galar Idoate, Mikel; Institute of Smart Cities - ISC; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa
    In recent years we have seen huge advances in the area of Machine Learning, especially with the use of Deep Neural Networks. One of the most relevant examples is image classification, where convolutional neural networks have proven to be a vital tool, hard to replace with any other technique. Although aggregation functions, such as OWA operators, have previously been used on top of neural networks, usually to aggregate the outputs of different networks or systems (ensembles), in this paper we propose and explore a new way of using OWA aggregations in deep learning. We implement OWA aggregations as a new layer inside a convolutional neural network. These layers are used to learn additional order-based information from the feature maps of a certain layer, and the newly generated information is then used as complementary input for the following layers. We carry out several tests introducing the new layer in a VGG13-based reference network and show that this layer introduces new knowledge into the network without substantially increasing training times.
    (A rough PyTorch sketch of such a layer is given after this list.)
  • Publication (Open Access)
    Learning channel-wise ordered aggregations in deep neural networks
    (Springer, 2021) Domínguez Catena, Iris; Paternain Dallo, Daniel; Galar Idoate, Mikel; Institute of Smart Cities - ISC; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa
    Deep Neural Networks are one of the most common techniques for approaching image classification problems. These systems are capable of classifying images at different levels of detail, with an accuracy that can sometimes surpass even manual classification by humans. The most common architectures for Deep Neural Networks are based on convolutional layers, which simultaneously perform a convolution on each input channel and a linear aggregation of the convolved channels. In this work, we develop a new method for augmenting the information of a layer inside a Deep Neural Network using channel-wise ordered aggregations. We develop a new layer that can be placed at different points inside a Deep Neural Network. This layer takes the feature maps of the previous layer and adds new feature maps by applying several channel-wise ordered aggregations based on learned weighting vectors. We perform several experiments introducing this layer in a VGG neural network and study its impact, obtaining better accuracy scores over a sample dataset based on ImageNet. We also study the convergence and evolution of the weighting vectors of the new layers over the learning process, which gives a better understanding of the way the system exploits the additional information to gain new knowledge.
    (A sketch of how such weighting vectors could be tracked during training is given after this list.)
  • Publication (Open Access)
    Gender stereotyping impact in facial expression recognition
    (Springer, 2023) Domínguez Catena, Iris; Paternain Dallo, Daniel; Galar Idoate, Mikel; Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika; Institute of Smart Cities - ISC; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa
    Facial Expression Recognition (FER) uses images of faces to identify the emotional state of users, allowing for a closer interaction between humans and autonomous systems. Unfortunately, as the images naturally integrate some demographic information, such as the apparent age, gender, and race of the subject, these systems are prone to demographic bias issues. In recent years, machine learning-based models have become the most popular approach to FER. These models require training on large datasets of facial expression images, and their generalization capabilities are strongly related to the characteristics of the dataset. In publicly available FER datasets, apparent gender representation is usually mostly balanced, but the representation within individual labels is not, embedding social stereotypes into the datasets and generating a potential for harm. Although this type of bias has been overlooked so far, it is important to understand the impact it may have in the context of FER. To do so, we use a popular FER dataset, FER+, to generate derivative datasets with different amounts of stereotypical bias by altering the gender proportions of certain labels. We then measure the discrepancy between the performance of models trained on these datasets for the apparent gender groups. We observe a discrepancy in the recognition of certain emotions between genders of up to 29% under the worst bias conditions. Our results also suggest a safety range of stereotypical bias in a dataset within which it does not appear to produce stereotypical bias in the resulting model. Our findings support the need for a thorough bias analysis of public datasets in problems like FER, where a global balance of demographic representation can still hide other types of bias that harm certain demographic groups.
    (An illustrative sketch of this kind of dataset manipulation is given after this list.)
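The bias analysis mentioned in "Diseño y captura de una base de datos para el reconocimiento de emociones minimizando sesgos" can be illustrated with generic metrics computed from dataset annotations. The sketch below is an assumption for illustration only, not the metrics used in the paper: it uses normalized Shannon entropy as a stand-in for representational bias and normalized mutual information between demographic group and emotion label as a stand-in for stereotypical bias.

```python
# Illustrative only: generic bias metrics over dataset annotations.
import numpy as np
from collections import Counter

def representational_bias(groups):
    """Normalized Shannon entropy of the demographic group distribution:
    1.0 means perfectly balanced representation, lower means more imbalance."""
    counts = np.array(list(Counter(groups).values()), dtype=float)
    p = counts / counts.sum()
    h = -(p * np.log(p)).sum()
    return h / np.log(len(counts)) if len(counts) > 1 else 0.0

def stereotypical_bias(groups, labels):
    """Normalized mutual information between demographic group and emotion
    label: 0.0 means the labels are independent of the group."""
    n = len(groups)
    joint = Counter(zip(groups, labels))
    pg, pl = Counter(groups), Counter(labels)
    mi = sum((c / n) * np.log((c / n) / ((pg[g] / n) * (pl[l] / n)))
             for (g, l), c in joint.items())
    hg = -sum((c / n) * np.log(c / n) for c in pg.values())
    hl = -sum((c / n) * np.log(c / n) for c in pl.values())
    return mi / np.sqrt(hg * hl) if hg > 0 and hl > 0 else 0.0

# Toy example: a mild over-representation of "happy" in the female group.
groups = ["female"] * 60 + ["male"] * 40
labels = ["happy"] * 40 + ["angry"] * 20 + ["happy"] * 15 + ["angry"] * 25
print(representational_bias(groups))       # < 1.0: representational imbalance
print(stereotypical_bias(groups, labels))  # > 0.0: gender-emotion association
```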
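The order-based layers proposed in "Additional feature layers from ordered aggregations for deep neural networks" could look roughly like the following PyTorch module. This is a hedged sketch, not the authors' implementation: the class name, the softmax normalization of the weighting vectors, and the concatenation strategy are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OWAFeatureLayer(nn.Module):
    """Appends OWA-style, order-based feature maps to the input feature maps."""
    def __init__(self, in_channels: int, n_owa: int):
        super().__init__()
        # One learnable weighting vector per additional OWA feature map.
        self.raw_weights = nn.Parameter(torch.randn(n_owa, in_channels))

    def forward(self, x):                                  # x: (B, C, H, W)
        # Sort channel values (descending) independently at each spatial position.
        sorted_x, _ = torch.sort(x, dim=1, descending=True)
        # Softmax keeps each weighting vector non-negative and summing to one,
        # as required for an OWA operator.
        w = F.softmax(self.raw_weights, dim=1)             # (n_owa, C)
        owa = torch.einsum("oc,bchw->bohw", w, sorted_x)   # (B, n_owa, H, W)
        # The new order-based maps complement the original feature maps.
        return torch.cat([x, owa], dim=1)

# Usage inside a VGG-like block; the next convolution must accept C + n_owa channels.
layer = OWAFeatureLayer(in_channels=64, n_owa=4)
out = layer(torch.randn(8, 64, 32, 32))                    # -> (8, 68, 32, 32)
```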
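For "Learning channel-wise ordered aggregations in deep neural networks", the abstract also mentions studying how the learned weighting vectors evolve during training. One simple, assumed way to capture that, building on layers like the hypothetical OWAFeatureLayer above that expose a raw_weights parameter, is to snapshot the normalized weights once per epoch:

```python
import torch.nn.functional as F

history = []  # one entry per (epoch, layer)

def log_ordered_aggregation_weights(model, epoch):
    """Record the normalized weighting vectors of every layer that exposes a
    learnable `raw_weights` parameter, so they can be plotted against epochs."""
    for name, module in model.named_modules():
        if hasattr(module, "raw_weights"):
            w = F.softmax(module.raw_weights.detach(), dim=1).cpu()
            history.append({"epoch": epoch, "layer": name, "weights": w})

# Called once per epoch in the training loop, e.g.:
#   for epoch in range(num_epochs):
#       train_one_epoch(model, loader)                  # hypothetical helper
#       log_ordered_aggregation_weights(model, epoch)
# history[i]["weights"][k] is then the k-th weighting vector at that epoch.
```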
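Finally, the controlled-bias experiment in "Gender stereotyping impact in facial expression recognition" can be illustrated with a small pandas sketch. The column names, the subsampling strategy, and the per-gender accuracy comparison are assumptions chosen for illustration, not the paper's exact protocol.

```python
import pandas as pd

def inject_stereotypical_bias(df: pd.DataFrame, label: str, gender: str,
                              keep_fraction: float, seed: int = 0) -> pd.DataFrame:
    """Return a copy of df where `gender` samples of emotion `label` are
    subsampled to `keep_fraction`, skewing that label's gender proportions."""
    target = (df["label"] == label) & (df["gender"] == gender)
    kept = df[target].sample(frac=keep_fraction, random_state=seed)
    return pd.concat([df[~target], kept]).reset_index(drop=True)

def per_gender_accuracy(df: pd.DataFrame, predictions) -> pd.Series:
    """Recognition accuracy split by apparent gender; the gap between the
    groups is the kind of discrepancy the study measures."""
    correct = df["label"].values == predictions
    return pd.Series(correct).groupby(df["gender"].values).mean()

# Example: keep only 25% of male "happy" samples to create a biased variant.
# biased_train = inject_stereotypical_bias(train_df, "happy", "male", 0.25)
```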