Title: Super-resolution for Sentinel-2 images
Authors: Galar Idoate, Mikel; Sesma Redín, Rubén; Ayala Lauroba, Christian; Aranda, Carlos
Date issued: 2019
Date available: 2020-05-19
ISSN: 1682-1750
DOI: 10.5194/isprs-archives-XLII-2-W16-95-2019
URI: https://academica-e.unavarra.es/handle/2454/36919
Description: Paper presented at PIA19+MRSS19 – Photogrammetric Image Analysis & Munich Remote Sensing Symposium, 2019, Munich. Includes poster.

Abstract: Obtaining Sentinel-2 imagery at a higher spatial resolution than the native bands, while ensuring that the output imagery preserves the original radiometry, has become a key issue since the deployment of the Sentinel-2 satellites. Several studies have addressed the upsampling of the 20 m and 60 m Sentinel-2 bands to 10 m resolution by taking advantage of the 10 m bands. However, how to super-resolve the 10 m bands to higher resolutions remains an open problem. Recently, deep learning-based techniques have become a de facto standard for single-image super-resolution. The difficulty is that training a neural network for super-resolution requires image pairs at both the original resolution (10 m in Sentinel-2) and the target resolution (e.g., 5 m or 2.5 m). Since there is no way to obtain higher-resolution images for Sentinel-2, we propose to use images from other sensors with the greatest similarity in terms of spectral bands, appropriately pre-processed. These images, together with Sentinel-2 images, form our training set. We carry out several experiments using state-of-the-art convolutional neural networks for single-image super-resolution, showing that this methodology is a first step toward greater spatial resolution of Sentinel-2 images.

Extent: 8 p.
Format: application/pdf
Language: eng
Rights: © Authors 2019. Creative Commons Attribution 4.0 International (CC BY 4.0)
Keywords: Super-resolution; Deep learning; Sentinel-2; Image enhancement; Convolutional neural network; Optical images
Type: info:eu-repo/semantics/conferenceObject
Access: info:eu-repo/semantics/openAccess
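The abstract describes building training pairs by degrading imagery from a similar, higher-resolution sensor down to Sentinel-2-like resolution. A minimal sketch of that pair-generation step, assuming a hypothetical 2.5 m patch and simple block-average downsampling (the paper's actual pre-processing pipeline is not specified here):

```python
import numpy as np

def block_downsample(img: np.ndarray, factor: int) -> np.ndarray:
    """Average-pool a single-band image by an integer factor
    (e.g., 2.5 m -> 10 m corresponds to factor 4)."""
    h = img.shape[0] // factor * factor
    w = img.shape[1] // factor * factor
    img = img[:h, :w]  # crop so dimensions divide evenly
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Hypothetical 2.5 m patch from a higher-resolution sensor (256x256 pixels).
hr_patch = np.random.rand(256, 256).astype(np.float32)

# Simulated Sentinel-2-like 10 m input: downsample by 4 -> shape (64, 64).
lr_patch = block_downsample(hr_patch, 4)

# (lr_patch, hr_patch) would then be one (input, target) training pair
# for a single-image super-resolution CNN.
```

Averaging (rather than decimation) keeps the patch mean unchanged, which loosely mimics the radiometry-preservation requirement the abstract emphasizes.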