Sesma Redín, Rubén

Last Name: Sesma Redín
First Name: Rubén
Department: Estadística, Informática y Matemáticas

Search Results

Now showing 1 - 3 of 3
  • Publication (Open Access)
    Super-resolution for Sentinel-2 images
    (International Society for Photogrammetry and Remote Sensing, 2019) Galar Idoate, Mikel; Sesma Redín, Rubén; Ayala Lauroba, Christian; Aranda, Carlos; Institute of Smart Cities - ISC
    Obtaining Sentinel-2 imagery of higher spatial resolution than the native bands, while ensuring that the output imagery preserves the original radiometry, has become a key issue since the deployment of the Sentinel-2 satellites. Several studies have been carried out on upsampling the 20m and 60m Sentinel-2 bands to 10m resolution, taking advantage of the 10m bands. However, how to super-resolve the 10m bands to higher resolutions is still an open problem. Recently, deep learning-based techniques have become the de facto standard for single-image super-resolution. The problem is that neural network learning for super-resolution requires image pairs at both the original resolution (10m in Sentinel-2) and the target resolution (e.g., 5m or 2.5m). Since there is no way to obtain higher-resolution images for Sentinel-2, we propose to consider images from other sensors having the greatest similarity in terms of spectral bands, which are appropriately pre-processed. These images, together with Sentinel-2 images, form our training set. We carry out several experiments using state-of-the-art Convolutional Neural Networks for single-image super-resolution, showing that this methodology is a first step toward greater spatial resolution of Sentinel-2 images.
  • Publication (Open Access)
    Diffusion models for remote sensing imagery semantic segmentation
    (IEEE, 2023-10-20) Ayala Lauroba, Christian; Sesma Redín, Rubén; Aranda, Carlos; Galar Idoate, Mikel; Institute of Smart Cities - ISC; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa, PJUPNA25-2022
    Denoising Diffusion Probabilistic Models have exhibited impressive performance for generative modelling of images. This paper aims to explore the potential of diffusion models for semantic segmentation tasks in the context of remote sensing. The major challenge of employing these models for semantic segmentation tasks is the generative nature of the model, which produces an arbitrary segmentation mask from a random noise input. Therefore, the diffusion process needs to be constrained to produce a segmentation mask that matches the target image. To address this issue, the denoising process is conditioned by utilizing the input image as a reference. In the experimental study, the proposed model is compared against other state-of-the-art semantic segmentation architectures using the Massachusetts Buildings Aerial dataset. The results of this study provide valuable insights into the potential of diffusion models for semantic segmentation tasks in the field of remote sensing.
  • Publication (Open Access)
    Learning super-resolution for Sentinel-2 images with real ground truth data from a reference satellite
    (Copernicus, 2020) Galar Idoate, Mikel; Sesma Redín, Rubén; Ayala Lauroba, Christian; Albizua, Lourdes; Aranda, Carlos; Institute of Smart Cities - ISC; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa
    The Copernicus program, via its Sentinel missions, is making Earth observation more accessible and affordable for everybody. Sentinel-2 images provide multi-spectral information every 5 days for each location. However, the maximum spatial resolution of its bands is 10m, for the RGB and near-infrared bands. Increasing the spatial resolution of Sentinel-2 images without additional costs would make any subsequent analysis more accurate. Most approaches to super-resolution for Sentinel-2 have focused on obtaining 10m resolution images for the bands at lower resolutions (20m and 60m), taking advantage of the information provided by the finer-resolution (10m) bands. In contrast, our focus is on increasing the resolution of the 10m bands themselves, that is, super-resolving the 10m bands to 2.5m resolution, where no additional information is available. This problem is known as single-image super-resolution, and deep learning-based approaches have become the state of the art for it on standard images. Obviously, models learned for standard images do not translate well to satellite images. Hence, the problem is how to train a deep learning model for super-resolving Sentinel-2 images when no ground truth exists (Sentinel-2 images at 2.5m). We propose a methodology for learning Convolutional Neural Networks for Sentinel-2 image super-resolution, making use of images from other sensors that have a high similarity with Sentinel-2 in terms of spectral bands but greater spatial resolution. Our proposal is tested with a state-of-the-art neural network, showing that it can be useful for learning to increase the spatial resolution of the RGB and near-infrared bands of Sentinel-2.
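
The super-resolution work listed above trains Convolutional Neural Networks to map 10m Sentinel-2 bands onto a finer grid. The actual architectures used in those papers are not reproduced on this page; the sketch below is only a minimal, illustrative single-image super-resolution network in the ESPCN style (sub-pixel convolution), assuming 4 input bands (RGB + near-infrared) and a 2x upscaling factor (10m to 5m). The name SRNet and every hyperparameter are placeholders, not the authors' choices.

# Minimal ESPCN-style single-image super-resolution network (illustrative only;
# not the architecture used in the publications above).
import torch
import torch.nn as nn

class SRNet(nn.Module):
    def __init__(self, in_bands: int = 4, upscale: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_bands, 64, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Produce upscale**2 feature maps per output band, then rearrange
            # them spatially with PixelShuffle (sub-pixel convolution).
            nn.Conv2d(32, in_bands * upscale ** 2, kernel_size=3, padding=1),
            nn.PixelShuffle(upscale),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

# Example: a 10m (RGB + NIR) patch mapped onto a grid twice as fine.
lr = torch.randn(1, 4, 64, 64)   # low-resolution input patch
sr = SRNet()(lr)                 # output shape: (1, 4, 128, 128)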
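For the diffusion-based segmentation paper, the key idea summarised in the abstract is that the denoising process is conditioned on the input image so that the generated mask matches that image. The sketch below is a hypothetical, minimal illustration of one way to do this, concatenating the image with the noisy mask (and a timestep channel) at each step; the toy NoisePredictor network, the linear noise schedule, and all names are assumptions, not the authors' implementation.

# Illustrative sketch of conditioning a diffusion model on the input image for
# segmentation: the noisy mask is concatenated with the image before denoising.
# This is a toy stand-in, not the published model.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

class NoisePredictor(nn.Module):
    """Predicts the noise added to the mask, given image + noisy mask + timestep."""
    def __init__(self, image_channels: int = 3, mask_channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(image_channels + mask_channels + 1, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, mask_channels, 3, padding=1),
        )

    def forward(self, image, noisy_mask, t):
        # Broadcast the normalised timestep as an extra conditioning channel.
        t_map = (t.float() / T).view(-1, 1, 1, 1).expand(-1, 1, *image.shape[2:])
        return self.net(torch.cat([image, noisy_mask, t_map], dim=1))

def training_step(model, image, mask):
    t = torch.randint(0, T, (image.shape[0],))
    noise = torch.randn_like(mask)
    a = alphas_cumprod[t].view(-1, 1, 1, 1)
    noisy_mask = a.sqrt() * mask + (1 - a).sqrt() * noise   # forward diffusion
    return F.mse_loss(model(image, noisy_mask, t), noise)   # denoising objective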
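The third listing learns super-resolution with real ground truth from a reference satellite: higher-resolution imagery from a spectrally similar sensor is pre-processed to form (low-resolution, high-resolution) training pairs. The following sketch only illustrates the general idea of simulating the coarser Sentinel-2 grid by spatially averaging a reference patch; the band harmonisation, co-registration, and radiometric matching described in the paper are omitted, and make_training_pair is a hypothetical helper, not the authors' pipeline.

# Illustrative construction of (low-res, high-res) training pairs from a
# higher-resolution reference image; pre-processing steps are omitted.
import torch
import torch.nn.functional as F

def make_training_pair(reference_patch: torch.Tensor, scale: int = 4):
    """reference_patch: (bands, H, W) patch from the higher-resolution sensor
    (e.g. 2.5m). Returns a simulated coarse input and the original as target."""
    hr = reference_patch.unsqueeze(0)
    # Area interpolation approximates the coarser sensor's spatial averaging.
    lr = F.interpolate(hr, scale_factor=1.0 / scale, mode="area")
    return lr.squeeze(0), hr.squeeze(0)

lr, hr = make_training_pair(torch.rand(4, 256, 256), scale=4)  # 2.5m -> 10m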