Show simple item record

dc.creator: Galar Idoate, Mikel
dc.creator: Sesma Redín, Rubén
dc.creator: Ayala Lauroba, Christian
dc.creator: Albizua, Lourdes
dc.creator: Aranda, Carlos
dc.date.accessioned: 2021-01-20T09:41:13Z
dc.date.available: 2021-01-20T09:41:13Z
dc.date.issued: 2020
dc.identifier.issn: 2072-4292 (Electronic)
dc.identifier.uri: https://hdl.handle.net/2454/39028
dc.description.abstract: Earth observation data is becoming more accessible and affordable thanks to the Copernicus programme and its Sentinel missions. Every location worldwide can be freely monitored approximately every 5 days using the multi-spectral images provided by Sentinel-2. The spatial resolution of these images for the RGBN (RGB + Near-infrared) bands is 10 m, which is more than enough for many tasks but falls short for many others. For this reason, if their spatial resolution could be enhanced without additional costs, any subsequent analyses based on these images would benefit. Previous works have mainly focused on increasing the resolution of the lower resolution bands of Sentinel-2 (20 m and 60 m) to 10 m resolution. In these cases, super-resolution is supported by bands captured at finer resolutions (RGBN at 10 m). In contrast, this paper focuses on the problem of increasing the spatial resolution of the 10 m bands to either 5 m or 2.5 m resolutions, without having additional information available. This problem is known as single-image super-resolution. For standard images, deep learning techniques have become the de facto standard for learning the mapping from lower to higher resolution images due to their learning capacity. However, super-resolution models learned for standard images do not work well with satellite images, and hence a specific model for this problem needs to be learned. The main challenge that this paper aims to solve is how to train a super-resolution model for Sentinel-2 images when no ground truth exists (Sentinel-2 images at 5 m or 2.5 m). Our proposal consists of using a reference satellite whose spectral bands are highly similar to those of Sentinel-2, but with higher spatial resolution, to create image pairs at both the source and target resolutions. This way, we can train a state-of-the-art Convolutional Neural Network to recover details not present in the original RGBN bands.
An exhaustive experimental study is carried out to validate our proposal, including a comparison with the most extended strategy for super-resolving Sentinel-2, which consists of learning a model to super-resolve from an under-sampled version at either 40 m or 20 m to the original 10 m resolution and then applying this model to super-resolve from 10 m to 5 m or 2.5 m. Finally, we also show that the spectral radiometry of the native bands is maintained when super-resolving images, in such a way that they can be used for any subsequent processing as if they were images acquired by Sentinel-2.
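The baseline strategy the abstract compares against, learning from self-degraded image pairs, can be sketched as follows. This is a minimal NumPy illustration of building (low-resolution, high-resolution) training patch pairs by average-pooling; the function names and patch parameters are ours for illustration, not the authors' code.

```python
import numpy as np

def downsample(img, factor):
    """Average-pool an (H, W, C) image by an integer factor,
    simulating a coarser ground sampling distance (e.g. 10 m -> 20 m)."""
    h, w, c = img.shape
    assert h % factor == 0 and w % factor == 0
    return img.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def make_training_pairs(img, factor=2, patch=32, stride=32):
    """Cut an image into (LR, HR) patch pairs, where each LR patch is
    a degraded copy of its HR counterpart. A super-resolution network
    would then be trained to map LR patches back to HR patches."""
    pairs = []
    h, w, _ = img.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            hr = img[y:y + patch, x:x + patch]
            lr = downsample(hr, factor)
            pairs.append((lr, hr))
    return pairs
```

A model trained this way on 20 m-to-10 m pairs is then applied, unchanged, to super-resolve the native 10 m bands to 5 m; the paper's contribution is to replace these synthetic pairs with real ground truth from a spectrally similar, higher-resolution reference satellite.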
dc.description.sponsorship: M.G. was partially supported by Tracasa Instrumental S.L. under projects OTRI 2018-901-073, OTRI 2019-901-091 and OTRI 2020-901-050. C.A. (Christian Ayala) was partially supported by the Government of Navarra under the industrial PhD program 2020, reference 0011-1408-2020-000008.
dc.format.extent: 37 p.
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.publisher: MDPI
dc.relation.ispartof: Remote Sensing, 2020, 12(18), 2941
dc.rights: © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Sentinel-2
dc.subject: Super-resolution
dc.subject: Deep learning
dc.subject: Convolutional neural networks
dc.subject: Multi-spectral image
dc.title: Super-resolution of Sentinel-2 images using convolutional neural networks and real ground truth data
dc.type: info:eu-repo/semantics/article
dc.type: Article
dc.contributor.department: Institute of Smart Cities - ISC
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.rights.accessRights: Open access
dc.identifier.doi: 10.3390/RS12182941
dc.relation.publisherversion: https://doi.org/10.3390/RS12182941
dc.type.version: info:eu-repo/semantics/publishedVersion
dc.type.version: Published version
dc.contributor.funder: Gobierno de Navarra / Nafarroako Gobernua, 0011-1408-2020-000008



