Show simple item record

dc.creator: Galar Idoate, Mikel [es_ES]
dc.creator: Sesma Redín, Rubén [es_ES]
dc.creator: Ayala Lauroba, Christian [es_ES]
dc.creator: Albizua, Lourdes [es_ES]
dc.creator: Aranda, Carlos [es_ES]
dc.date.accessioned: 2021-02-02T09:41:08Z
dc.date.available: 2021-02-02T09:41:08Z
dc.date.issued: 2020
dc.identifier.issn: 2194-9050 (Electronic)
dc.identifier.uri: https://hdl.handle.net/2454/39122
dc.description.abstract: The Copernicus program, through its Sentinel missions, is making Earth observation more accessible and affordable for everybody. Sentinel-2 images provide multi-spectral information every 5 days for each location. However, the maximum spatial resolution of its bands is 10 m, for the RGB and near-infrared bands. Increasing the spatial resolution of Sentinel-2 images without additional costs would make any subsequent analysis more accurate. Most approaches to super-resolution for Sentinel-2 have focused on bringing the lower-resolution bands (20 m and 60 m) to 10 m resolution, taking advantage of the information provided by the finer-resolution (10 m) bands. In contrast, our focus is on increasing the resolution of the 10 m bands themselves, that is, super-resolving the 10 m bands to 2.5 m resolution, where no additional information is available. This problem is known as single-image super-resolution, and deep learning-based approaches have become the state of the art for standard images. However, models learned for standard images do not translate well to satellite images. Hence, the problem is how to train a deep learning model for super-resolving Sentinel-2 images when no ground truth exists (Sentinel-2 images at 2.5 m). We propose a methodology for learning convolutional neural networks for Sentinel-2 image super-resolution that makes use of images from other sensors that are highly similar to Sentinel-2 in terms of spectral bands but have greater spatial resolution. Our proposal is tested with a state-of-the-art neural network, showing that it can be useful for learning to increase the spatial resolution of the RGB and near-infrared bands of Sentinel-2. [en]
dc.description.sponsorship: This work was partially supported by the Public University of Navarre under project PJUPNA13 and Tracasa Instrumental S.L. (OTRI 2018 901 073 and OTRI 2019 901 091). [en]
dc.format.extent: 8 p.
dc.format.mimetype: application/pdf [en]
dc.language.iso: eng [en]
dc.publisher: Copernicus [en]
dc.relation.ispartof: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, V-1-2020, 9-16 [en]
dc.rights: © Authors 2020. CC BY 4.0 License. [en]
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Super-resolution [en]
dc.subject: Deep learning [en]
dc.subject: Sentinel-2 [en]
dc.subject: Convolutional neural networks [en]
dc.subject: Multi-spectral images [en]
dc.title: Learning super-resolution for Sentinel-2 images with real ground truth data from a reference satellite [en]
dc.type: info:eu-repo/semantics/conferenceObject [en]
dc.type: Contribución a congreso / Biltzarrerako ekarpena [es]
dc.contributor.department: Institute of Smart Cities - ISC [es_ES]
dc.rights.accessRights: info:eu-repo/semantics/openAccess [en]
dc.rights.accessRights: Acceso abierto / Sarbide irekia [es]
dc.identifier.doi: 10.5194/isprs-annals-V-1-2020-9-2020
dc.relation.publisherversion: https://doi.org/10.5194/isprs-annals-V-1-2020-9-2020
dc.type.version: info:eu-repo/semantics/publishedVersion [en]
dc.type.version: Versión publicada / Argitaratu den bertsioa [es]
dc.contributor.funder: Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa [es]
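
The abstract describes training a convolutional neural network for 4x super-resolution of the Sentinel-2 10 m RGB and near-infrared bands, using imagery from a higher-resolution reference sensor as ground truth. The record does not include implementation details, so the following is only a minimal sketch of that general training setup in PyTorch: the SimpleSRNet architecture, the area-downsampling pairing scheme, the L1 loss, and the random stand-in data are illustrative assumptions, not the authors' method.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleSRNet(nn.Module):
    """Small residual CNN that upsamples 4-band (RGB+NIR) patches by a factor of 4."""
    def __init__(self, bands=4, feats=64, scale=4):
        super().__init__()
        self.head = nn.Conv2d(bands, feats, 3, padding=1)
        self.body = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU(inplace=True))
            for _ in range(4)
        ])
        # Sub-pixel (pixel shuffle) upsampling: feats -> bands * scale^2 channels, then rearrange.
        self.tail = nn.Sequential(
            nn.Conv2d(feats, bands * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        f = self.head(x)
        return self.tail(f + self.body(f))

def make_pair(hr_patch, scale=4):
    """Degrade a high-resolution reference patch (B, 4, H, W) to simulate a
    Sentinel-2-like 10 m input; the original patch serves as the 2.5 m target."""
    lr = F.interpolate(hr_patch, scale_factor=1 / scale, mode="area")
    return lr, hr_patch

model = SimpleSRNet()
optim = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

for step in range(100):                       # toy loop with random stand-in data
    hr = torch.rand(8, 4, 128, 128)           # placeholder for reference-sensor patches
    lr, target = make_pair(hr)
    pred = model(lr)
    loss = loss_fn(pred, target)
    optim.zero_grad(); loss.backward(); optim.step()

# At inference time, real Sentinel-2 10 m RGB+NIR patches (normalized consistently
# with the training data) would be fed to the trained network to produce 2.5 m output.

Pixel-shuffle upsampling and an L1 loss are common choices in single-image super-resolution networks; the paper evaluates a state-of-the-art network, which may differ from this sketch in architecture and training details.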


Files in this item


This item appears in the following collection(s)


