Jurío Munárriz, Aránzazu

Last Name: Jurío Munárriz
First Name: Aránzazu

Department: Estadística, Informática y Matemáticas
Institute: ISC. Institute of Smart Cities

Publications (2)
  • Publication (Open Access)
    Less can be more: representational vs. stereotypical gender bias in facial expression recognition
    (Springer, 2024-10-14) Domínguez Catena, Iris; Paternain Dallo, Daniel; Jurío Munárriz, Aránzazu; Galar Idoate, Mikel; Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa
    Machine learning models can inherit biases from their training data, leading to discriminatory or inaccurate predictions. This is particularly concerning with the increasing use of large, unsupervised datasets for training foundational models. Traditionally, demographic biases within these datasets have not been well understood, limiting our ability to understand how they propagate to the models themselves. To address this issue, this paper investigates the propagation of demographic biases from datasets into machine learning models. We focus on the gender demographic component, analyzing two types of bias: representational and stereotypical. For our analysis, we consider the domain of facial expression recognition (FER), a field known to exhibit biases in most popular datasets. We use AffectNet, one of the largest FER datasets, as our baseline for carefully designing and generating subsets that incorporate varying strengths of both representational and stereotypical bias. Subsequently, we train several models on these biased subsets, evaluating their performance on a common test set to assess the propagation of bias into the models' predictions. Our results show that representational bias has a weaker impact than expected: models exhibit good generalization ability even in the absence of one gender in the training dataset. Conversely, stereotypical bias has a significantly stronger impact, primarily concentrated on the biased class, although it can also influence predictions for unbiased classes. These results highlight the need for a bias analysis that differentiates between types of bias, which is crucial for the development of effective bias mitigation strategies. (A sketch of the subset-generation step appears after this list.)
  • Publication (Open Access)
    A comparative study of CO2 forecasting strategies in school classrooms: a step toward improving indoor air quality
    (MDPI, 2025-03-09) Garcia-Pinilla, Peio; Jurío Munárriz, Aránzazu; Paternain Dallo, Daniel; Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika; Institute of Smart Cities - ISC; Gobierno de Navarra / Nafarroako Gobernua
    This paper comprehensively investigates the performance of various strategies for predicting CO2 levels in school classrooms over different time horizons, using data collected through IoT devices. We gathered Indoor Air Quality (IAQ) data from fifteen schools in Navarra, Spain, between 10 January and 3 April 2022, with measurements taken at 10-minute intervals. Three prediction strategies, comprising seven models, were trained on the data and compared using statistical tests. The study confirms that simple methodologies are effective for short-term predictions, while Machine Learning (ML)-based models perform better over longer prediction horizons. Furthermore, this study demonstrates the feasibility of using low-cost devices combined with ML models for forecasting, which can help improve IAQ in sensitive environments such as schools. (A sketch of this baseline-vs-ML comparison appears after this list.)
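As a sketch of the subset-generation step in the bias-propagation study above, the Python snippet below samples training subsets with a controlled overall gender ratio (representational bias) and with a skewed gender ratio inside a single expression class (stereotypical bias). The metadata columns, class labels, and sample sizes are illustrative assumptions, not the authors' pipeline or the real AffectNet annotations.

```python
# Minimal sketch: sampling FER training subsets with controlled gender bias.
# All column names, labels, and sizes are hypothetical stand-ins.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy stand-in for AffectNet-style metadata: one row per labeled image.
data = pd.DataFrame({
    "gender": rng.choice(["female", "male"], size=10_000),
    "expression": rng.choice(["happy", "sad", "angry", "neutral"], size=10_000),
})

def representational_subset(df, female_ratio, n):
    """Sample n rows with a fixed overall female/male proportion."""
    n_f = int(n * female_ratio)
    females = df[df.gender == "female"].sample(n_f, random_state=0)
    males = df[df.gender == "male"].sample(n - n_f, random_state=0)
    return pd.concat([females, males])

def stereotypical_subset(df, target_class, female_ratio_in_class, n_per_class):
    """Keep classes balanced overall, but skew the gender mix in one class."""
    parts = []
    for cls, grp in df.groupby("expression"):
        ratio = female_ratio_in_class if cls == target_class else 0.5
        parts.append(representational_subset(grp, ratio, n_per_class))
    return pd.concat(parts)

# Subsets of increasing representational bias strength; identical models
# trained on each would then be scored on one shared, unbiased test set.
for r in (0.5, 0.75, 1.0):
    subset = representational_subset(data, female_ratio=r, n=4_000)
    print(r, subset.gender.value_counts(normalize=True).round(2).to_dict())

# One stereotypically biased subset: 90% female images in the "happy" class.
skew = stereotypical_subset(data, "happy", female_ratio_in_class=0.9, n_per_class=1_000)
print(skew[skew.expression == "happy"].gender.value_counts(normalize=True).round(2).to_dict())
```

Training the same architecture on each subset and evaluating on one common, unbiased test set, as the abstract describes, isolates how much of each bias type reaches the model's predictions.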
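To illustrate the CO2 paper's headline contrast (simple methods win at short horizons, ML wins at long ones), the sketch below pits a persistence baseline against a random-forest regressor on a synthetic 10-minute CO2 series. The lag features, model choice, and data are assumptions for illustration, not the paper's seven models or its Navarra sensor data.

```python
# Minimal sketch: persistence baseline vs. an ML regressor across horizons.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)

# Synthetic CO2 trace (ppm) at 10-minute steps: daily cycle plus noise.
t = np.arange(2_000)
co2 = 600 + 300 * np.sin(2 * np.pi * t / 144) + rng.normal(0, 20, t.size)

def lagged_features(series, n_lags, horizon):
    """X holds the last n_lags readings; y is the value `horizon` steps ahead."""
    rows = series.size - n_lags - horizon + 1
    X = np.stack([series[i : i + n_lags] for i in range(rows)])
    y = series[n_lags + horizon - 1 :]
    return X, y

for horizon in (1, 6, 18):  # 10 min, 1 h, and 3 h ahead
    X, y = lagged_features(co2, n_lags=12, horizon=horizon)
    split = int(0.8 * len(y))  # chronological train/test split
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[:split], y[:split])
    ml_mae = mean_absolute_error(y[split:], model.predict(X[split:]))
    # Persistence baseline: predict the last observed reading.
    naive_mae = mean_absolute_error(y[split:], X[split:, -1])
    print(f"{horizon * 10:>3} min ahead: persistence MAE={naive_mae:5.1f}, RF MAE={ml_mae:5.1f}")
```

On this periodic toy series the persistence error grows with the horizon while the learned model can exploit the cycle, mirroring the qualitative pattern the abstract reports; the paper's statistical tests on real classroom data are what actually support that conclusion.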