Pagola Barrio, Miguel

Last Name: Pagola Barrio
First Name: Miguel
Department: Estadística, Informática y Matemáticas
Institute: ISC. Institute of Smart Cities

Search Results

Now showing 1 - 2 of 2
  • Publication (Open Access)
    Co-occurrence of deep convolutional features for image search
    (Elsevier, 2020) Forcén Carvalho, Juan Ignacio; Pagola Barrio, Miguel; Barrenechea Tartas, Edurne; Bustince Sola, Humberto; Estatistika, Informatika eta Matematika; Institute of Smart Cities - ISC; Estadística, Informática y Matemáticas
    Image search can be tackled using deep features from pre-trained Convolutional Neural Networks (CNN). The feature map from the last convolutional layer of a CNN encodes descriptive information from which a discriminative global descriptor can be obtained. We propose a new representation of co-occurrences of deep convolutional features to extract additional relevant information from this last convolutional layer. Combining this co-occurrence map with the feature map yields an improved image representation. We present two methods for obtaining the co-occurrence representation: the first based on direct aggregation of activations, and the second based on a trainable co-occurrence representation. The image descriptors derived from our methodology improve performance on well-known image retrieval datasets, as we show in the experiments. (A minimal illustrative sketch of a co-occurrence descriptor appears after this result list.)
  • Publication (Open Access)
    Aggregation of deep features for image retrieval based on object detection
    (Springer, 2019-09-22) Forcén Carvalho, Juan Ignacio; Pagola Barrio, Miguel; Barrenechea Tartas, Edurne; Bustince Sola, Humberto; Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika; Institute of Smart Cities - ISC; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa
    Image retrieval can be tackled using deep features from pretrained Convolutional Neural Networks (CNN). The feature map from the last convolutional layer of a CNN encodes descriptive information from which a discriminative global descriptor can be obtained. However, these global descriptors combine all of the information in the image, giving equal importance to the background and to the object of the query. We propose to use object detection based on saliency models to identify relevant regions in the image and thereby obtain better image descriptors. We extend our proposal to multi-regional image representation and combine it with other spatial weighting measures. The descriptors derived from the salient regions improve performance on three well-known image retrieval datasets, as we show in the experiments. (A sketch of saliency-weighted pooling also follows this list.)
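
The first abstract above describes building a global descriptor by combining the last convolutional feature map with a co-occurrence map of its activations. The following is only a minimal sketch of that idea under stated assumptions: the feature-map shape, the neighbourhood used for co-occurrences (right and bottom neighbours), and the final concatenation plus L2 normalisation are illustrative choices, not the paper's exact "direct aggregation" or trainable variants.

```python
# Illustrative sketch of a co-occurrence descriptor from a CNN feature map.
# Shapes and the neighbourhood scheme are assumptions, not the paper's method.
import numpy as np

def cooccurrence_descriptor(feature_map: np.ndarray) -> np.ndarray:
    """feature_map: activations of the last conv layer, shape (C, H, W)."""
    C, H, W = feature_map.shape
    # Sum-pooled global descriptor from the raw feature map.
    pooled = feature_map.reshape(C, -1).sum(axis=1)              # shape (C,)
    # Direct aggregation of co-occurring activations: for each spatial position,
    # accumulate the outer product of its channel vector with those of its
    # right and bottom neighbours.
    cooc = np.zeros((C, C))
    for y in range(H - 1):
        for x in range(W - 1):
            v = feature_map[:, y, x]
            cooc += np.outer(v, feature_map[:, y, x + 1])        # right neighbour
            cooc += np.outer(v, feature_map[:, y + 1, x])        # bottom neighbour
    # Combine pooled features with the flattened co-occurrence map and L2-normalise.
    descriptor = np.concatenate([pooled, cooc.ravel()])
    return descriptor / (np.linalg.norm(descriptor) + 1e-12)

# Example: a fake feature map with 64 channels on a 7x7 grid.
desc = cooccurrence_descriptor(np.random.rand(64, 7, 7).astype(np.float32))
print(desc.shape)   # (64 + 64*64,) = (4160,)
```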
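
The second abstract aggregates deep features while down-weighting the background using a saliency-based detector. Below is a minimal sketch of saliency-weighted sum pooling; the saliency map is taken as a given input, and the hand-drawn central region in the usage example is hypothetical, since the paper's actual detection model is not reproduced here.

```python
# Illustrative sketch of saliency-weighted pooling of deep features.
# The saliency source and weighting scheme are assumptions for illustration.
import numpy as np

def saliency_weighted_descriptor(feature_map: np.ndarray,
                                 saliency: np.ndarray) -> np.ndarray:
    """feature_map: (C, H, W) conv activations; saliency: (H, W) in [0, 1]."""
    C, H, W = feature_map.shape
    weights = saliency / (saliency.sum() + 1e-12)                # normalised weights
    # Weighted sum pooling: low-saliency (background) positions contribute less.
    descriptor = (feature_map * weights[None, :, :]).reshape(C, -1).sum(axis=1)
    return descriptor / (np.linalg.norm(descriptor) + 1e-12)

# Example: emphasise a hypothetical central "object" region of a 7x7 feature map.
fmap = np.random.rand(64, 7, 7).astype(np.float32)
sal = np.zeros((7, 7), dtype=np.float32)
sal[2:5, 2:5] = 1.0                                              # assumed salient region
print(saliency_weighted_descriptor(fmap, sal).shape)             # (64,)
```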