Galar Idoate, Mikel

Last Name: Galar Idoate

First Name: Mikel

Department: Estadística, Informática y Matemáticas

Institute: ISC. Institute of Smart Cities


Search Results

Now showing 1 - 10 of 56
  • Publication (Open Access)
    PhantomFields: fast time and spatial multiplexation of acoustic fields for generation of superresolution patterns
    (2021) Elizondo Martínez, Sonia; Goñi Carnicero, Jaime; Galar Idoate, Mikel; Marzo Pérez, Asier; Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika
    Ultrasonic fields generated by phased arrays can be tailored to obtain a custom pattern of acoustic radiation forces. These force fields can pattern particles as well as be felt by the human hand, enabling applications for bioprinting and contactless haptic devices. The force fields can be switched orders of magnitude faster than the reaction time of the particles they push or of the human mechanoreceptors of touch. Therefore, a quick multiplexation in time or in space of different acoustic fields will be perceived as the average field. In this paper, we optimise the non-linear problem of decomposing a target force field into several multiplexed acoustic fields. We create averaged fields, PhantomFields, that cannot be created by a regular (unique) emission of an acoustic field. We improve accuracy by time multiplexation and spatial multiplexation, i.e. quick rotation of the emitters. These processes improve the resolution and strength of the obtained fields without requiring new hardware, opening up applications in haptic devices and 3D printing.
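The averaging effect at the heart of this idea can be illustrated with a minimal sketch (plain Python, hypothetical 1-D force fields; the paper's actual optimisation of the non-linear decomposition is far more involved): when emission switches between component fields much faster than particles or mechanoreceptors can react, the perceived force at each point is approximately the time average of the components.

```python
# Hedged sketch: the perceived force of a quickly multiplexed emission is
# approximately the elementwise time average of the component fields.
def perceived_field(fields):
    """Average a list of force fields (equal-length lists of scalars)."""
    n = len(fields)
    return [sum(f[i] for f in fields) / n for i in range(len(fields[0]))]

# Two multiplexed fields, each focusing force on a different point,
# are perceived as a single field with two simultaneous foci:
field_a = [1.0, 0.0, 0.0, 0.0]
field_b = [0.0, 0.0, 0.0, 1.0]
print(perceived_field([field_a, field_b]))  # [0.5, 0.0, 0.0, 0.5]
```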
  • Publication (Open Access)
    Enhancing DreamBooth with LoRA for generating unlimited characters with stable diffusion
    (IEEE, 2024-09-09) Pascual Casas, Rubén; Maiza Coupin, Adrián Mikel; Sesma Sara, Mikel; Paternain Dallo, Daniel; Galar Idoate, Mikel; Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika; Institute of Smart Cities - ISC; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa, PJUPNA2023-11377
    This paper addresses the challenge of generating unlimited new and distinct characters that encompass the style and shared visual characteristics of a limited set of human-designed characters. This is a relevant problem in the audiovisual industry, as the ability to rapidly produce original characters that adhere to specific characteristics greatly increases the possibilities in the production of movies, series, or video games. Our solution is built upon DreamBooth, a widely extended fine-tuning method for text-to-image models. We propose an adaptation focusing on two main challenges: the impracticality of relying on detailed image prompts for character description and the few-shot learning scenario with a limited set of characters available for training. To solve these issues, we introduce additional character-specific tokens to DreamBooth training and remove its class-specific regularization dataset. For an unlimited generation of characters, we propose the usage of random tokens and random embeddings. This proposal is tested on two specialized datasets, and the results show our method's capability to produce diverse characters that adhere to a style and visual characteristics. An ablation study analyzing the contributions of the proposed modifications is also presented.
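The random-token idea can be sketched roughly as follows (all names and the token list are hypothetical illustrations, not the paper's implementation; a real pipeline would draw from the text encoder's vocabulary and condition a Stable Diffusion model on the resulting prompt):

```python
import random

def random_character_identifier(vocab, length=3, rng=None):
    """Compose a fresh character identifier from rare tokens, one per slot."""
    rng = rng or random.Random()
    return " ".join(rng.choice(vocab) for _ in range(length))

# Rare, low-frequency tokens are typically used as identifiers
# (placeholder list for illustration):
rare_tokens = ["sks", "zwx", "qrt", "vbn", "plm"]
prompt = f"a character in the studio style, {random_character_identifier(rare_tokens)}"
print(prompt)  # each call yields a prompt for a new, unseen character
```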
  • Publication (Open Access)
    Guidelines to compare semantic segmentation maps at different resolutions
    (IEEE, 2024) Ayala Lauroba, Christian; Aranda, Carlos; Galar Idoate, Mikel; Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika; Institute of Smart Cities - ISC; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa
    Choosing the proper ground sampling distance (GSD) is a vital decision in remote sensing, which can determine the success or failure of a project. Higher resolutions may be more suitable for accurately detecting objects, but they also come with higher costs and require more computing power. Semantic segmentation is a common task in remote sensing where GSD plays a crucial role. In semantic segmentation, each pixel of an image is classified into a predefined set of classes, resulting in a semantic segmentation map. However, comparing the results of semantic segmentation at different GSDs is not straightforward. Unlike scene classification and object detection tasks, which are evaluated at scene and object level, respectively, semantic segmentation is typically evaluated at pixel level. This makes it difficult to match elements across different GSDs, resulting in a range of methods for computing metrics, some of which may not be adequate. For this reason, the purpose of this work is to set out a clear set of guidelines for fairly comparing semantic segmentation results obtained at various spatial resolutions. Additionally, we propose to complement the commonly used scene-based pixel-wise metrics with region-based pixel-wise metrics, allowing for a more detailed analysis of the model performance. The set of guidelines together with the proposed region-based metrics are illustrated with building and swimming pool detection problems. The experimental study demonstrates that by following the proposed guidelines and the proposed region-based pixel-wise metrics, it is possible to fairly compare segmentation maps at different spatial resolutions and gain a better understanding of the model's performance. To promote the usage of these guidelines and ease the computation of the new region-based metrics, we create the seg-eval Python library and make it publicly available at https://github.com/itracasa/seg-eval.
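As a rough illustration of the matching problem (a hedged sketch, not the seg-eval library's actual API), one way to compare maps at different GSDs is to bring the coarser map onto the finer grid with nearest-neighbour upsampling before computing any pixel-wise metric:

```python
def upsample_nearest(mask, factor):
    """Nearest-neighbour upsampling of a 2-D binary mask (list of lists)."""
    return [[v for v in row for _ in range(factor)]
            for row in mask for _ in range(factor)]

def iou(a, b):
    """Pixel-wise intersection over union of two same-shape binary masks."""
    inter = sum(x and y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    union = sum(x or y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return inter / union if union else 1.0

coarse = [[1, 0],
          [0, 0]]            # e.g. a prediction at 10 m GSD
fine   = [[1, 1, 0, 0],
          [1, 1, 0, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]      # e.g. ground truth at 5 m GSD
print(iou(upsample_nearest(coarse, 2), fine))  # 1.0
```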
  • Publication (Open Access)
    Less can be more: representational vs. stereotypical gender bias in facial expression recognition
    (Springer, 2024-10-14) Domínguez Catena, Iris; Paternain Dallo, Daniel; Jurío Munárriz, Aránzazu; Galar Idoate, Mikel; Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa
    Machine learning models can inherit biases from their training data, leading to discriminatory or inaccurate predictions. This is particularly concerning with the increasing use of large, unsupervised datasets for training foundational models. Traditionally, demographic biases within these datasets have not been well understood, limiting our ability to understand how they propagate to the models themselves. To address this issue, this paper investigates the propagation of demographic biases from datasets into machine learning models. We focus on the gender demographic component, analyzing two types of bias: representational and stereotypical. For our analysis, we consider the domain of facial expression recognition (FER), a field known to exhibit biases in most popular datasets. We use AffectNet, one of the largest FER datasets, as our baseline for carefully designing and generating subsets that incorporate varying strengths of both representational and stereotypical bias. Subsequently, we train several models on these biased subsets, evaluating their performance on a common test set to assess the propagation of bias into the models' predictions. Our results show that representational bias has a weaker impact than expected. Models exhibit a good generalization ability even in the absence of one gender in the training dataset. Conversely, stereotypical bias has a significantly stronger impact, primarily concentrated on the biased class, although it can also influence predictions for unbiased classes. These results highlight the need for a bias analysis that differentiates between types of bias, which is crucial for the development of effective bias mitigation strategies.
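The construction of subsets with controlled representational bias can be sketched abstractly (a simplification of the paper's protocol; pool and function names are illustrative): fix the subset size and vary the share drawn from each demographic group.

```python
import random

def biased_subset(pool_a, pool_b, share_a, size, seed=0):
    """Sample `size` items with a controlled fraction `share_a` from pool_a."""
    rng = random.Random(seed)
    k = round(size * share_a)
    return rng.sample(pool_a, k) + rng.sample(pool_b, size - k)

# Toy pools standing in for female/male FER samples:
females = [f"f{i}" for i in range(100)]
males = [f"m{i}" for i in range(100)]
subset = biased_subset(females, males, share_a=0.8, size=50)  # 80/20 split
print(sum(s.startswith("f") for s in subset), "of", len(subset), "female")
```

Sweeping `share_a` from 0 to 1 yields a family of training sets whose only controlled difference is the representational bias strength.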
  • Publication (Open Access)
    Generación ilimitada de personajes mediante Stable Diffusion con DreamBooth y LoRA
    (CAEPIA, 2024) Pascual Casas, Rubén; Maiza Coupin, Adrián Mikel; Sesma Sara, Mikel; Paternain Dallo, Daniel; Galar Idoate, Mikel; Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika; Institute of Smart Cities - ISC; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa, PJUPNA2023-11377; Gobierno de Navarra / Nafarroako Gobernua
    This paper addresses the challenge of generating an unlimited number of new and distinct characters that encompass the style and shared visual characteristics of a limited set of human-designed characters. This is a highly relevant problem in the audiovisual industry, since the ability to rapidly produce original characters that adhere to specific characteristics greatly increases the possibilities in the production of movies, series, or video games. Our solution builds on DreamBooth, a widely used fine-tuning method for text-to-image generative models. We propose an adaptation focused on two main challenges: the impracticality of using detailed image prompts to describe the characters and the difficulty of fine-tuning models from a limited set of characters. To solve these issues, we introduce additional character-specific tokens into DreamBooth training and remove its regularization dataset. To generate characters without limit, we propose the use of random tokens and random embeddings. We validate the usefulness of the proposal on two different datasets. The results show the capability of our method to produce diverse characters that adhere to a concrete style and visual characteristics. Finally, we present an ablation study.
  • Publication (Open Access)
    DSAP: analyzing bias through demographic comparison of datasets
    (Elsevier, 2024-10-29) Domínguez Catena, Iris; Paternain Dallo, Daniel; Galar Idoate, Mikel; Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa; Gobierno de Navarra / Nafarroako Gobernua
    In the last few years, Artificial Intelligence (AI) systems have become increasingly widespread. Unfortunately, these systems can share many biases with human decision-making, including demographic biases. Often, these biases can be traced back to the data used for training, where large uncurated datasets have become the norm. Despite our awareness of these biases, we still lack general tools to detect, quantify, and compare them across different datasets. In this work, we propose DSAP (Demographic Similarity from Auxiliary Profiles), a two-step methodology for comparing the demographic composition of datasets. First, DSAP uses existing demographic estimation models to extract a dataset's demographic profile. Second, it applies a similarity metric to compare the demographic profiles of different datasets. While these individual components are well-known, their joint use for demographic dataset comparison is novel and has not been previously addressed in the literature. This approach allows three key applications: the identification of demographic blind spots and bias issues across datasets, the measurement of demographic bias, and the assessment of demographic shifts over time. DSAP can be used on datasets with or without explicit demographic information, provided that demographic information can be derived from the samples using auxiliary models, such as those for image or voice datasets. To show the usefulness of the proposed methodology, we consider the Facial Expression Recognition task, where demographic bias has previously been found. The three applications are studied over a set of twenty datasets with varying properties. The code is available at https://github.com/irisdominguez/DSAP.
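DSAP's two steps can be sketched in miniature (a hedged illustration, not the DSAP library's API; the paper studies several similarity metrics, of which 1 minus total variation distance is used here as one plausible choice): build a demographic profile as a normalised histogram from the auxiliary model's per-sample predictions, then compare two profiles.

```python
from collections import Counter

def demographic_profile(predicted_groups):
    """Step 1: normalised histogram over predicted demographic groups."""
    counts = Counter(predicted_groups)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def profile_similarity(p, q):
    """Step 2: similarity as 1 - total variation distance between profiles."""
    groups = set(p) | set(q)
    tv = 0.5 * sum(abs(p.get(g, 0) - q.get(g, 0)) for g in groups)
    return 1 - tv

a = demographic_profile(["f", "f", "m", "m"])  # balanced dataset
b = demographic_profile(["f", "m", "m", "m"])  # skewed dataset
print(profile_similarity(a, b))  # 0.75
```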
  • Publication (Open Access)
    Super-resolution of Sentinel-2 images using convolutional neural networks and real ground truth data
    (MDPI, 2020) Galar Idoate, Mikel; Sesma Redín, Rubén; Ayala Lauroba, Christian; Albizua, Lourdes; Aranda, Carlos; Institute of Smart Cities - ISC; Gobierno de Navarra / Nafarroako Gobernua, 0011-1408-2020-000008.
    Earth observation data is becoming more accessible and affordable thanks to the Copernicus programme and its Sentinel missions. Every location worldwide can be freely monitored approximately every 5 days using the multi-spectral images provided by Sentinel-2. The spatial resolution of these images for RGBN (RGB + Near-infrared) bands is 10 m, which is more than enough for many tasks but falls short for many others. For this reason, if their spatial resolution could be enhanced without additional costs, any posterior analyses based on these images would benefit. Previous works have mainly focused on increasing the resolution of the lower resolution bands of Sentinel-2 (20 m and 60 m) to 10 m. In these cases, super-resolution is supported by bands captured at finer resolutions (RGBN at 10 m). In contrast, this paper focuses on the problem of increasing the spatial resolution of the 10 m bands to either 5 m or 2.5 m without additional information available. This problem is known as single-image super-resolution. For standard images, deep learning techniques have become the de facto standard to learn the mapping from lower to higher resolution images due to their learning capacity. However, super-resolution models learned for standard images do not work well with satellite images, and hence a specific model for this problem needs to be learned. The main challenge that this paper aims to solve is how to train a super-resolution model for Sentinel-2 images when no ground truth exists (Sentinel-2 images at 5 m or 2.5 m). Our proposal consists of using a reference satellite with a high similarity to Sentinel-2 in terms of spectral bands, but with higher spatial resolution, to create image pairs at both the source and target resolutions. This way, we can train a state-of-the-art convolutional neural network to recover details not present in the original RGBN bands. An exhaustive experimental study is carried out to validate our proposal, including a comparison with the most extended strategy for super-resolving Sentinel-2, which consists of learning a model to super-resolve from an under-sampled version at either 40 m or 20 m to the original 10 m resolution and then applying this model to super-resolve from 10 m to 5 m or 2.5 m. Finally, we also show that the spectral radiometry of the native bands is maintained when super-resolving images, in such a way that they can be used for any subsequent processing as if they were images acquired by Sentinel-2.
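The pair-generation idea can be sketched with a toy degradation step (a hedged illustration; the paper's actual pipeline involves band harmonisation with a real reference satellite, while here a simple block mean stands in for the degradation): downsample the high-resolution reference tile to the source resolution, yielding a (low, high) pair that can supervise a super-resolution CNN.

```python
def block_mean_downsample(img, factor):
    """Average `factor` x `factor` blocks of a 2-D image (list of lists)."""
    h, w = len(img), len(img[0])
    return [[sum(img[y + dy][x + dx]
                 for dy in range(factor) for dx in range(factor)) / factor ** 2
             for x in range(0, w, factor)]
            for y in range(0, h, factor)]

# High-resolution reference tile (e.g. 2.5 m) -> simulated coarse input;
# the network then learns the inverse mapping coarse -> fine.
hi_res = [[4, 4, 0, 0],
          [4, 4, 0, 0],
          [0, 0, 8, 8],
          [0, 0, 8, 8]]
print(block_mean_downsample(hi_res, 2))  # [[4.0, 0.0], [0.0, 8.0]]
```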
  • Publication (Open Access)
    Multi-temporal data augmentation for high frequency satellite imagery: a case study in Sentinel-1 and Sentinel-2 building and road segmentation
    (ISPRS, 2022) Ayala Lauroba, Christian; Aranda Magallón, Coral; Galar Idoate, Mikel; Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika; Institute of Smart Cities - ISC
    Semantic segmentation of remote sensing images has many practical applications such as urban planning or disaster assessment. Deep learning-based approaches have shown their usefulness in automatically segmenting large remote sensing images, helping to automate these tasks. However, deep learning models require large amounts of labeled data to generalize well to unseen scenarios. The generation of global-scale remote sensing datasets with high intraclass variability presents a major challenge. For this reason, data augmentation techniques have been widely applied to artificially increase the size of the datasets. Among them, photometric data augmentation techniques such as random brightness, contrast, saturation, and hue have traditionally been applied with the aim of improving generalization against color spectrum variations, but they can have a negative effect on the model due to their synthetic nature. To solve this issue, sensors with high revisit times such as Sentinel-1 and Sentinel-2 can be exploited to realistically augment the dataset. Accordingly, this paper sets out a novel realistic multi-temporal color data augmentation technique. The proposed methodology has been evaluated in the building and road semantic segmentation tasks, considering a dataset composed of 38 Spanish cities. As a result, the experimental study shows the usefulness of the proposed multi-temporal data augmentation technique, which can be further improved with traditional photometric transformations.
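The core augmentation move can be sketched simply (a hedged illustration with hypothetical names; the underlying assumption is that building/road labels stay stable across the revisit period): instead of applying synthetic colour jitter, each tile is served with one of its real acquisitions from a different date.

```python
import random

def multitemporal_sample(acquisitions_by_date, rng=None):
    """Pick one real acquisition of a tile at random; labels are reused as-is."""
    rng = rng or random.Random(0)
    date = rng.choice(sorted(acquisitions_by_date))
    return date, acquisitions_by_date[date]

# Illustrative acquisitions of the same tile across the year:
tile = {"2022-01-05": "tile_jan.tif",
        "2022-04-10": "tile_apr.tif",
        "2022-07-18": "tile_jul.tif"}
print(multitemporal_sample(tile))
```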
  • Publication (Open Access)
    A framework for radial data comparison and its application to fingerprint analysis
    (Elsevier, 2016) Marco Detchart, Cedric; Cerrón González, Juan; Miguel Turullols, Laura de; López Molina, Carlos; Bustince Sola, Humberto; Galar Idoate, Mikel; Automatika eta Konputazioa; Institute of Smart Cities - ISC; Automática y Computación; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa
    This work tackles the comparison of radial data and proposes comparison measures that are further applied to fingerprint analysis. First, we study the similarity of scalar and non-scalar radial data, elaborating on previous works in fuzzy set theory. This study leads to the concepts of restricted radial equivalence function and Radial Similarity Measure, which model the perceived similarity between scalar and vectorial pieces of radial data, respectively. Second, the utility of these functions is tested in the context of fingerprint analysis, and more specifically, in singular point detection. With this aim, a novel Template-based Singular Point Detection method is proposed that takes advantage of these functions. Finally, their suitability is tested on different fingerprint databases. Different Similarity Measures are considered to show the flexibility offered by these measures, and the behaviour of the new method is compared with well-known singular point detection methods.
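One plausible instance of such a function can be sketched as follows (a hedged illustration, not the paper's exact definition): a similarity of two angles, e.g. fingerprint ridge orientations, that respects the wrap-around at 2*pi, so that directions near 360 degrees and near 0 degrees count as close.

```python
import math

def radial_equivalence(theta1, theta2):
    """Similarity of two angles in radians: 1 = identical, 0 = opposite."""
    d = abs(theta1 - theta2) % (2 * math.pi)
    d = min(d, 2 * math.pi - d)          # shortest angular distance
    return 1 - d / math.pi

print(radial_equivalence(0.0, math.pi))  # 0.0 (opposite directions)
print(radial_equivalence(6.2, 0.1))      # close to 1: nearly identical directions
```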
  • Publication (Open Access)
    d-Choquet integrals: Choquet integrals based on dissimilarities
    (Elsevier, 2020) Bustince Sola, Humberto; Mesiar, Radko; Fernández Fernández, Francisco Javier; Galar Idoate, Mikel; Paternain Dallo, Daniel; Altalhi, A. H.; Pereira Dimuro, Graçaliz; Bedregal, Benjamin; Takáč, Zdenko; Estatistika, Informatika eta Matematika; Institute of Smart Cities - ISC; Estadística, Informática y Matemáticas; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa, PJUPNA13
    The paper introduces a new class of functions from [0,1]^n to [0,n] called d-Choquet integrals. These functions are a generalization of the 'standard' Choquet integral obtained by replacing the difference in the definition of the usual Choquet integral by a dissimilarity function. In particular, the class of all d-Choquet integrals encompasses the class of all 'standard' Choquet integrals, but the use of dissimilarities provides higher flexibility and generality. We show that some d-Choquet integrals are aggregation/pre-aggregation/averaging functions and some of them are not. The conditions under which this happens are stated, and other properties of the d-Choquet integrals are studied.
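The construction can be sketched directly from the definition (a hedged sketch; with the dissimilarity delta(a, b) = |a - b| it reduces to the standard Choquet integral, which serves as a sanity check):

```python
# d-Choquet integral on [0,1]^n: the usual increment x_(i) - x_(i-1) in the
# Choquet integral is replaced by a dissimilarity delta(x_(i), x_(i-1)).
def d_choquet(x, measure, delta=lambda a, b: abs(a - b)):
    """x: input vector; measure: maps a frozenset of indices to [0,1]."""
    order = sorted(range(len(x)), key=lambda i: x[i])  # ascending permutation
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        coalition = frozenset(order[k:])  # indices whose value is >= x_(k)
        total += delta(x[i], prev) * measure(coalition)
        prev = x[i]
    return total

# With the cardinality-based fuzzy measure m(A) = |A|/n and delta = |a - b|,
# the d-Choquet integral equals the arithmetic mean of the inputs:
m = lambda A: len(A) / 3
print(d_choquet([0.2, 0.5, 0.4], m))  # mean of the inputs, 0.3666...
```

Swapping in another dissimilarity, e.g. `delta=lambda a, b: (a - b) ** 2`, gives a genuinely different function that is no longer the standard Choquet integral.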