Person:
Ayala Lauroba, Christian

Last Name

Ayala Lauroba

First Name

Christian

Department

Estadística, Informática y Matemáticas

ORCID

0000-0002-5229-9636

UPNA ID

812816

Search Results

Now showing 1 - 10 of 11
  • Publication (Embargo)
    A Deep Learning approach to land use classification in high resolution satellite imagery
    (2020) Ayala Lauroba, Christian; Galar Idoate, Mikel; Escuela Técnica Superior de Ingeniería Industrial, Informática y de Telecomunicación; Industria, Informatika eta Telekomunikazio Ingeniaritzako Goi Mailako Eskola Teknikoa
    Over recent years, interest in and the need for reliable, up-to-date land use and land cover information have grown, with numerous local, national and international projects aimed at creating and updating land use and land occupation databases. Recent years have also seen major technological advances in remote sensing and satellite image processing. In Europe, research in Earth observation has been boosted by the Copernicus programme, managed by the European Space Agency (ESA). This project focuses on developing a methodology for monitoring the degree of consolidation in the urban development areas of cities. To this end, satellite images from the Copernicus programme are semantically segmented by applying novel Deep Learning techniques. The results obtained have been compared against those produced by a semi-automatic process carried out by remote sensing professionals.
  • Publication (Open Access)
    Pushing the limits of Sentinel-2 for building footprint extraction
    (IEEE, 2022) Ayala Lauroba, Christian; Aranda, Carlos; Galar Idoate, Mikel; Institute of Smart Cities - ISC
    Building footprint maps are of high importance nowadays since a wide range of services rely on them to work. However, keeping these maps up-to-date is costly and time-consuming due to the great deal of human intervention required. Several automation attempts have been carried out in the last decade aiming to fully automate the process. However, taking into account the complexity of the task and the current limitations of semantic segmentation deep learning models, the vast majority of approaches rely on aerial imagery (<1 m). As a result, prohibitive costs and long revisit times prevent the remote sensing community from maintaining up-to-date building maps. This work proposes a novel deep learning architecture to accurately extract building footprints from high resolution satellite imagery (10 m). Accordingly, super-resolution and semantic segmentation techniques have been fused to make it possible not only to improve building boundary definition but also to detect buildings with sub-pixel width. As a result, fine-grained building maps at 2.5 m are generated from Sentinel-2 imagery, closing the gap between satellite and aerial semantic segmentation.
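The core idea of this abstract, producing a segmentation mask on a grid finer than the input, can be sketched in a few lines of NumPy. Everything here (the `sr_segmentation_head` name, nearest-neighbour upsampling, a fixed threshold) is an illustrative assumption; the paper learns the upsampling jointly with the segmentation:

```python
import numpy as np

def upsample_nearest(x, factor):
    # Repeat each pixel along both spatial axes (a stand-in for a
    # learned super-resolution module).
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

def sr_segmentation_head(logits, factor=4, threshold=0.5):
    # Turn per-pixel building logits on the 10 m grid into a binary
    # mask at 10 m / factor (e.g. 2.5 m for factor=4).
    probs = 1.0 / (1.0 + np.exp(-logits))       # sigmoid over logits
    probs_hr = upsample_nearest(probs, factor)  # super-resolved probabilities
    return (probs_hr > threshold).astype(np.uint8)
```

In the paper the upsampling is a trainable part of the network, so boundaries can sharpen rather than merely repeat; this sketch only shows how the output grid comes to be finer than the input.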
  • Publication (Open Access)
    A deep learning approach to an enhanced building footprint and road detection in high-resolution satellite imagery
    (MDPI, 2021) Ayala Lauroba, Christian; Sesma Redín, Rubén; Aranda, Carlos; Galar Idoate, Mikel; Institute of Smart Cities - ISC; Gobierno de Navarra / Nafarroako Gobernua
    The detection of building footprints and road networks has many useful applications, including the monitoring of urban development and real-time navigation. Taking into account that a great deal of human attention is required by these remote sensing tasks, a lot of effort has been made to automate them. However, the vast majority of the approaches rely on very high-resolution satellite imagery (<2.5 m), whose costs are not yet affordable for maintaining up-to-date maps. Working with the limited spatial resolution provided by high-resolution satellite imagery such as Sentinel-1 and Sentinel-2 (10 m) makes it hard to detect buildings and roads, since these labels may coexist within the same pixel. This paper focuses on this problem and presents a novel methodology capable of detecting buildings and roads with sub-pixel width by increasing the resolution of the output masks. This methodology consists of fusing Sentinel-1 and Sentinel-2 data (at 10 m) together with OpenStreetMap to train deep learning models for building and road detection at 2.5 m. This becomes possible thanks to the use of OpenStreetMap vector data, which can be rasterized to any desired resolution. Accordingly, a few simple yet effective modifications of the U-Net architecture are proposed to not only semantically segment the input image, but also to learn how to enhance the resolution of the output masks. As a result, the generated mappings quadruple the input spatial resolution, closing the gap between satellite and aerial imagery for building and road detection. To properly evaluate the generalization capabilities of the proposed methodology, a dataset composed of 44 cities across Spain has been considered and divided into training and testing cities. Both quantitative and qualitative results show that high-resolution satellite imagery can be used for sub-pixel width building and road detection following the proper methodology.
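The enabler mentioned in the abstract is that vector labels can be rasterized at any ground sampling distance, so 2.5 m training masks come from the same geometries as 10 m ones. A minimal sketch, assuming simple point-in-polygon sampling at pixel centres (a real pipeline would use a GIS library such as rasterio):

```python
import numpy as np

def point_in_polygon(x, y, poly):
    # Ray-casting test: count edge crossings of a horizontal ray from (x, y).
    inside = False
    n = len(poly)
    for k in range(n):
        x1, y1 = poly[k]
        x2, y2 = poly[(k + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def rasterize_polygon(poly, extent, gsd):
    # Burn one polygon onto a grid with the requested ground sampling
    # distance; the same vectors yield masks at 10 m, 2.5 m, etc.
    xmin, ymin, xmax, ymax = extent
    w = int(round((xmax - xmin) / gsd))
    h = int(round((ymax - ymin) / gsd))
    mask = np.zeros((h, w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            x = xmin + (j + 0.5) * gsd   # pixel-centre coordinates
            y = ymax - (i + 0.5) * gsd
            mask[i, j] = point_in_polygon(x, y, poly)
    return mask
```

Rasterizing the same footprint at gsd=2 and gsd=1 yields masks of different sizes from identical vectors, which is exactly what lets 10 m inputs be paired with 2.5 m targets.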
  • Publication (Open Access)
    Towards fine-grained road maps extraction using Sentinel-2 imagery
    (Copernicus, 2021) Ayala Lauroba, Christian; Aranda, Carlos; Galar Idoate, Mikel; Institute of Smart Cities - ISC; Gobierno de Navarra / Nafarroako Gobernua
    Nowadays, it is highly important to keep road maps up-to-date, since a great deal of services rely on them. To date, however, this labour has demanded a great deal of human attention due to its complexity. In the last decade, promising attempts have been carried out to fully automate the extraction of road networks from remote sensing imagery. Nevertheless, the vast majority of methods rely on aerial imagery (<1 m), whose costs are not yet affordable for maintaining up-to-date maps. This work proves that it is also possible to accurately detect roads using high resolution satellite imagery (10 m). Accordingly, we have relied on Sentinel-2 imagery, considering its free availability and its higher revisit frequency compared to aerial imagery. It must be taken into account that the limited spatial resolution of this sensor drastically increases the difficulty of the road detection task, since the feasibility of detecting a road depends on its width, which can reach sub-pixel size in Sentinel-2 imagery. For that purpose, a new deep learning architecture which combines semantic segmentation and super-resolution techniques is proposed. As a result, fine-grained road maps at 2.5 m are generated from Sentinel-2 imagery.
  • Publication (Open Access)
    Diffusion models for remote sensing imagery semantic segmentation
    (IEEE, 2023-10-20) Ayala Lauroba, Christian; Sesma Redín, Rubén; Aranda, Carlos; Galar Idoate, Mikel; Institute of Smart Cities - ISC; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa, PJUPNA25-2022
    Denoising Diffusion Probabilistic Models have exhibited impressive performance for generative modelling of images. This paper aims to explore the potential of diffusion models for semantic segmentation tasks in the context of remote sensing. The major challenge of employing these models for semantic segmentation tasks is the generative nature of the model, which produces an arbitrary segmentation mask from a random noise input. Therefore, the diffusion process needs to be constrained to produce a segmentation mask that matches the target image. To address this issue, the denoising process is conditioned by utilizing the input image as a reference. In the experimental study, the proposed model is compared against other state-of-the-art semantic segmentation architectures using the Massachusetts Buildings Aerial dataset. The results of this study provide valuable insights into the potential of diffusion models for semantic segmentation tasks in the field of remote sensing.
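The conditioning idea, feeding the input image to the noise predictor so the sampled mask matches that image rather than being arbitrary, can be sketched as one reverse step. The DDIM-style deterministic update and all names here are illustrative assumptions; the paper's exact sampler may differ:

```python
import numpy as np

def conditioned_denoise_step(x_t, image, t, alphas_cumprod, eps_model):
    # One reverse diffusion step for a noisy mask x_t. The input image
    # is concatenated channel-wise so the noise predictor is conditioned
    # on it, tying the sampled segmentation mask to that image.
    a_t = alphas_cumprod[t]
    a_prev = alphas_cumprod[t - 1] if t > 0 else 1.0
    eps = eps_model(np.concatenate([x_t, image], axis=0), t)
    x0_hat = (x_t - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)  # predicted clean mask
    # Deterministic (DDIM-style) move towards the predicted clean mask.
    return np.sqrt(a_prev) * x0_hat + np.sqrt(1.0 - a_prev) * eps
```

In the actual model, `eps_model` would be a trained network; here it is just a placeholder callable with the same (noisy mask + image, timestep) interface.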
  • Publication (Open Access)
    Super-resolution of Sentinel-2 images using convolutional neural networks and real ground truth data
    (MDPI, 2020) Galar Idoate, Mikel; Sesma Redín, Rubén; Ayala Lauroba, Christian; Albizua, Lourdes; Aranda, Carlos; Institute of Smart Cities - ISC; Gobierno de Navarra / Nafarroako Gobernua, 0011-1408-2020-000008.
    Earth observation data is becoming more accessible and affordable thanks to the Copernicus programme and its Sentinel missions. Every location worldwide can be freely monitored approximately every 5 days using the multi-spectral images provided by Sentinel-2. The spatial resolution of these images for the RGBN (RGB + near-infrared) bands is 10 m, which is more than enough for many tasks but falls short for many others. For this reason, if their spatial resolution could be enhanced without additional costs, any posterior analysis based on these images would benefit. Previous works have mainly focused on increasing the resolution of the lower resolution bands of Sentinel-2 (20 m and 60 m) to 10 m. In those cases, super-resolution is supported by bands captured at finer resolutions (RGBN at 10 m). In contrast, this paper focuses on the problem of increasing the spatial resolution of the 10 m bands to either 5 m or 2.5 m without any additional information available. This problem is known as single-image super-resolution. For standard images, deep learning techniques have become the de facto standard for learning the mapping from lower to higher resolution images due to their learning capacity. However, super-resolution models learned for standard images do not work well on satellite images, and hence a specific model for this problem needs to be learned. The main challenge that this paper aims to solve is how to train a super-resolution model for Sentinel-2 images when no ground truth exists (Sentinel-2 images at 5 m or 2.5 m). Our proposal consists of using a reference satellite whose spectral bands are highly similar to those of Sentinel-2 but which has a higher spatial resolution, to create image pairs at both the source and target resolutions. This way, we can train a state-of-the-art convolutional neural network to recover details not present in the original RGBN bands.
An exhaustive experimental study is carried out to validate our proposal, including a comparison with the most extended strategy for super-resolving Sentinel-2, which consists of learning a model to super-resolve from an under-sampled version at either 40 m or 20 m to the original 10 m resolution and then applying this model to super-resolve from 10 m to 5 m or 2.5 m. Finally, we also show that the spectral radiometry of the native bands is maintained when super-resolving images, in such a way that they can be used for any subsequent processing as if they were images acquired by Sentinel-2.
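The pair-construction idea can be sketched as follows, assuming the reference-satellite image is degraded to the coarser source grid by block averaging (the actual degradation model used to build the training pairs may differ):

```python
import numpy as np

def make_sr_pair(ref_img, factor=4):
    # Build a (low-res input, high-res target) training pair from a
    # single reference-satellite band: block-average it down by
    # `factor` to mimic the coarser source resolution, and keep the
    # original as the target the network must recover.
    h, w = ref_img.shape
    hh, ww = h // factor, w // factor
    blocks = ref_img[:hh * factor, :ww * factor].reshape(hh, factor, ww, factor)
    return blocks.mean(axis=(1, 3)), ref_img
```

Training on many such pairs teaches the network the source-to-target mapping, which is then applied to real Sentinel-2 bands where no high-resolution ground truth exists.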
  • Publication (Open Access)
    Guidelines to compare semantic segmentation maps at different resolutions
    (IEEE, 2024) Ayala Lauroba, Christian; Aranda, Carlos; Galar Idoate, Mikel; Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika; Institute of Smart Cities - ISC; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa
    Choosing the proper ground sampling distance (GSD) is a vital decision in remote sensing, which can determine the success or failure of a project. Higher resolutions may be more suitable for accurately detecting objects, but they also come with higher costs and require more computing power. Semantic segmentation is a common task in remote sensing where GSD plays a crucial role. In semantic segmentation, each pixel of an image is classified into a predefined set of classes, resulting in a semantic segmentation map. However, comparing the results of semantic segmentation at different GSDs is not straightforward. Unlike scene classification and object detection tasks, which are evaluated at scene and object level, respectively, semantic segmentation is typically evaluated at pixel level. This makes it difficult to match elements across different GSDs, resulting in a range of methods for computing metrics, some of which may not be adequate. For this reason, the purpose of this work is to set out a clear set of guidelines for fairly comparing semantic segmentation results obtained at various spatial resolutions. Additionally, we propose to complement the commonly used scene-based pixel-wise metrics with region-based pixel-wise metrics, allowing for a more detailed analysis of the model performance. The set of guidelines together with the proposed region-based metrics are illustrated with building and swimming pool detection problems. The experimental study demonstrates that by following the proposed guidelines and the proposed region-based pixel-wise metrics, it is possible to fairly compare segmentation maps at different spatial resolutions and gain a better understanding of the model's performance. To promote the usage of these guidelines and ease the computation of the new region-based metrics, we create the seg-eval Python library and make it publicly available at https://github.com/itracasa/seg-eval.
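One guideline of this kind, bringing both maps onto a common (finest) grid before computing pixel-wise metrics, can be sketched like this; the nearest-neighbour resampling is an illustrative choice, and the full metric catalogue lives in the seg-eval library mentioned above:

```python
import numpy as np

def iou(a, b):
    # Pixel-wise intersection-over-union of two boolean masks.
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

def compare_across_gsd(mask_coarse, mask_fine, factor):
    # Resample the coarser map onto the fine grid so both maps are
    # judged against the same reference geometry before scoring.
    up = mask_coarse.repeat(factor, axis=0).repeat(factor, axis=1)
    return iou(up, mask_fine)
```

Scoring each map on its own grid would silently compare different geometries; a shared grid makes the pixel counts, and therefore the metrics, commensurable.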
  • Publication (Open Access)
    Multi-temporal data augmentation for high frequency satellite imagery: a case study in Sentinel-1 and Sentinel-2 building and road segmentation
    (ISPRS, 2022) Ayala Lauroba, Christian; Aranda Magallón, Coral; Galar Idoate, Mikel; Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika; Institute of Smart Cities - ISC
    Semantic segmentation of remote sensing images has many practical applications, such as urban planning or disaster assessment. Deep learning-based approaches have shown their usefulness in automatically segmenting large remote sensing images, helping to automate these tasks. However, deep learning models require large amounts of labeled data to generalize well to unseen scenarios. The generation of global-scale remote sensing datasets with high intraclass variability presents a major challenge. For this reason, data augmentation techniques have been widely applied to artificially increase the size of the datasets. Among them, photometric data augmentation techniques such as random brightness, contrast, saturation, and hue have traditionally been applied with the aim of improving generalization against color spectrum variations, but their synthetic nature can have a negative effect on the model. To solve this issue, sensors with high revisit frequency such as Sentinel-1 and Sentinel-2 can be exploited to realistically augment the dataset. Accordingly, this paper sets out a novel, realistic multi-temporal color data augmentation technique. The proposed methodology has been evaluated on the building and road semantic segmentation tasks, considering a dataset composed of 38 Spanish cities. The experimental study shows the usefulness of the proposed multi-temporal data augmentation technique, which can be further improved with traditional photometric transformations.
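The augmentation itself is conceptually tiny: instead of synthetic photometric jitter, draw a real acquisition of the same tile from another date and reuse the labels unchanged (buildings and roads are stable over short periods). A sketch, with `tile_series` as an assumed list of co-registered acquisitions of one tile:

```python
import numpy as np

def multitemporal_augment(tile_series, rng):
    # Pick a real acquisition of the same tile at a random date; the
    # segmentation labels stay valid, so no re-labelling is needed.
    idx = rng.integers(len(tile_series))
    return tile_series[idx]
```

Because the variation comes from real atmospheric and seasonal conditions, it avoids the artifacts of synthetic brightness/hue jitter, and it applies equally to radar bands such as Sentinel-1's, where photometric transforms make no sense.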
  • Publication (Open Access)
    Enhancing semantic segmentation models for high-resolution satellite imagery
    (2024) Ayala Lauroba, Christian; Galar Idoate, Mikel; Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika
    The main objective of this dissertation is to study, develop and fairly evaluate semantic segmentation models that use data with limited spatial resolution to tackle tasks demanding a high level of detail, such as the semantic segmentation of buildings and roads, taking advantage of the availability of public, freely accessible data. The aim is thus to narrow the gap with very high resolution products, which are currently out of reach for the vast majority of users. The dissertation is organized around four major objectives, each corresponding to an open problem set out in the previous section, which together address the main objective above.
    First, to study and develop semantic segmentation models for problems that require a high level of detail, such as building and road segmentation, using satellite imagery of low spatial resolution. Given the numerous methods in the literature for segmenting buildings and roads [ZLW18, BHF+19, W+20, SZH+21], we set out to study these techniques in order to identify their limitations and the problems that may arise when applying them to low resolution satellite imagery, such as that offered by the Sentinel satellites. Building on this information, we plan to design a new deep learning model capable of extracting buildings and roads from low resolution imagery with quality close to that obtained from very high resolution imagery.
    Second, to investigate the best strategy for tackling multi-class semantic segmentation problems. Multi-class semantic segmentation problems are usually addressed directly [YRH18]. However, in a context where spatial resolution significantly affects class separability, it seems natural to turn to binary decomposition strategies [LdCG09] or multi-task training schemes [Gir15]. These approaches could simplify the multi-class problem and thereby improve classification. Nevertheless, in Earth observation these strategies have not been widely used for semantic segmentation. We therefore propose a comparative study of the different strategies for tackling multi-class semantic segmentation problems in remote sensing, with the goal of establishing clear guidelines.
    Third, to design new data augmentation techniques that exploit the high revisit frequency offered by low resolution satellites. Photometric data augmentation techniques applied to Earth observation have important limitations, such as the generation of synthetic artifacts and the inability to model events such as sowing, harvesting or the position of the sun [GSD+20]. Moreover, these techniques are closely tied to optical sensors and therefore do not carry over to other sensor types such as thermal or radar. For this reason, we propose ways of exploiting the high revisit frequency provided by low resolution satellites as a data augmentation technique, without any need to label these data beforehand.
    Fourth, to establish guidelines for comparing semantic segmentation maps at multiple resolutions. Since there is no consensus in the literature on how to compare metrics obtained from semantic segmentation models with different spatial resolutions, we set out to study the limitations and advantages of each existing approach in order to propose clear guidelines on the correct way to carry out such a comparison.
  • Publication (Open Access)
    Multi-class strategies for joint building footprint and road detection in remote sensing
    (MDPI, 2021) Ayala Lauroba, Christian; Aranda, Carlos; Galar Idoate, Mikel; Institute of Smart Cities - ISC; Gobierno de Navarra / Nafarroako Gobernua, 0011-1408-2020-000008
    Building footprints and road networks are important inputs for a great deal of services. For instance, building maps are useful for urban planning, whereas road maps are essential for disaster response services. Traditionally, building and road maps are manually generated by remote sensing experts or land surveying, occasionally assisted by semi-automatic tools. In the last decade, deep learning-based approaches have demonstrated their capabilities to extract these elements automatically and accurately from remote sensing imagery. The building footprint and road network detection problem can be considered a multi-class semantic segmentation task, that is, a single model performs a pixel-wise classification on multiple classes, optimizing the overall performance. However, depending on the spatial resolution of the imagery used, both classes may coexist within the same pixel, drastically reducing their separability. In this regard, binary decomposition techniques, which have been widely studied in the machine learning literature, have proved useful for addressing multi-class problems. Accordingly, the multi-class problem can be split into multiple binary semantic segmentation sub-problems, specializing a different model for each class. Nevertheless, in these cases, an aggregation step is required to obtain the final output labels. Additionally, other novel approaches, such as multi-task learning, may come in handy to further increase the performance of the binary semantic segmentation models. Since there is no certainty as to which strategy should be carried out to accurately tackle a multi-class remote sensing semantic segmentation problem, this paper performs an in-depth study to shed light on the issue. For this purpose, open-access Sentinel-1 and Sentinel-2 imagery (at 10 m) are considered for extracting buildings and roads, making use of the well-known U-Net convolutional neural network.
It is worth stressing that building and road classes may coexist within the same pixel when working at such a low spatial resolution, setting a challenging problem scheme. Accordingly, a robust experimental study is developed to assess the benefits of the decomposition strategies and their combination with a multi-task learning scheme. The obtained results demonstrate that decomposing the considered multi-class remote sensing semantic segmentation problem into multiple binary ones using a One-vs-All binary decomposition technique leads to better results than the standard direct multi-class approach. Additionally, the benefits of using a multi-task learning scheme for pushing the performance of binary segmentation models are also shown.
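The One-vs-All aggregation step this study argues for can be sketched as follows: each binary model scores its own class, and a pixel is assigned the most confident class, or background when no model is confident. The function name and the threshold are illustrative assumptions:

```python
import numpy as np

def ova_aggregate(scores, threshold=0.5):
    # scores: (n_classes, H, W) sigmoid outputs of the per-class binary
    # segmentation models (e.g. buildings, roads).
    best = scores.argmax(axis=0) + 1          # class ids start at 1
    confident = scores.max(axis=0) > threshold
    return np.where(confident, best, 0)       # 0 = background
```

Splitting the task this way lets each binary model specialize, at the cost of this extra aggregation pass over the per-class score maps.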