Pagola Barrio, Miguel

First Name: Miguel
Last Name: Pagola Barrio
Department: Estadística, Informática y Matemáticas
Institute: ISC. Institute of Smart Cities
Publications
Search Results: showing 1 - 2 of 2
Publication (Open Access): Co-occurrence of deep convolutional features for image search (Elsevier, 2020)
Forcén Carvalho, Juan Ignacio; Pagola Barrio, Miguel; Barrenechea Tartas, Edurne; Bustince Sola, Humberto
Estatistika, Informatika eta Matematika; Institute of Smart Cities - ISC; Estadística, Informática y Matemáticas

Abstract: Image search can be tackled using deep features from pre-trained Convolutional Neural Networks (CNNs). The feature map from the last convolutional layer of a CNN encodes descriptive information from which a discriminative global descriptor can be obtained. We propose a new representation of co-occurrences of deep convolutional features to extract additional relevant information from this last convolutional layer. Combining this co-occurrence map with the feature map, we achieve an improved image representation. We present two different methods to obtain the co-occurrence representation: the first based on direct aggregation of activations, and the second based on a trainable co-occurrence representation. The image descriptors derived from our methodology improve performance on well-known image retrieval datasets, as we show in the experiments.

Publication (Open Access): Aggregation of deep features for image retrieval based on object detection (Springer, 2019-09-22)
Forcén Carvalho, Juan Ignacio; Pagola Barrio, Miguel; Barrenechea Tartas, Edurne; Bustince Sola, Humberto
Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika; Institute of Smart Cities - ISC; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa

Abstract: Image retrieval can be tackled using deep features from pretrained Convolutional Neural Networks (CNNs). The feature map from the last convolutional layer of a CNN encodes descriptive information from which a discriminative global descriptor can be obtained. However, these global descriptors combine all of the information in the image, giving equal importance to the background and to the query object. We propose to use object detection based on saliency models to identify relevant regions in the image and thereby obtain better image descriptors. We extend our proposal to a multi-regional image representation and combine it with other spatial weighting measures. The descriptors derived from the salient regions improve performance on three well-known image retrieval datasets, as we show in the experiments.
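The core idea shared by both abstracts (pooling a CNN's last convolutional feature map into a global descriptor, optionally enriched with channel co-occurrence statistics) can be sketched as follows. This is a minimal illustration, not the authors' code: it uses NumPy only, a random array stands in for a real CNN feature map, and the function names `global_descriptor` and `cooccurrence_descriptor` are hypothetical.

```python
import numpy as np

def global_descriptor(feature_map):
    """Sum-pool a (C, H, W) feature map into an L2-normalized global descriptor.

    This is the simplest form of spatial aggregation over the last
    convolutional layer; the papers build richer descriptors on top of it.
    """
    v = feature_map.sum(axis=(1, 2))          # one value per channel
    return v / (np.linalg.norm(v) + 1e-12)    # L2-normalize for cosine search

def cooccurrence_descriptor(feature_map, thresh=0.0):
    """Direct aggregation of channel co-activations (illustrative only).

    For every spatial location, count which pairs of channels are
    simultaneously active (above `thresh`), yielding a (C, C) co-occurrence
    matrix; the upper triangle is flattened into a descriptor.
    """
    C, H, W = feature_map.shape
    active = (feature_map > thresh).reshape(C, -1).astype(float)  # (C, H*W)
    co = active @ active.T                     # (C, C) co-activation counts
    v = co[np.triu_indices(C, k=1)]            # distinct channel pairs only
    return v / (np.linalg.norm(v) + 1e-12)

# Toy usage: a random array stands in for real CNN activations.
rng = np.random.default_rng(0)
fm = rng.random((8, 4, 4))                     # 8 channels, 4x4 spatial grid
print(global_descriptor(fm).shape)             # (8,)
print(cooccurrence_descriptor(fm, 0.5).shape)  # (28,) = 8*7/2 channel pairs
```

In practice the feature map would come from a pretrained network, and the two descriptors would be concatenated or combined before nearest-neighbor search over the image database.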