Forcén Carvalho, Juan Ignacio
Last Name: Forcén Carvalho
First Name: Juan Ignacio
Department: Estadística, Informática y Matemáticas
- Publications
Now showing 1 - 2 of 2
Publication (Open Access): Learning ordered pooling weights in image classification (Elsevier, 2020)
Forcén Carvalho, Juan Ignacio; Pagola Barrio, Miguel; Barrenechea Tartas, Edurne; Bustince Sola, Humberto
Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa
Spatial pooling is an important step in computer vision systems such as Convolutional Neural Networks or the Bag-of-Words method. The purpose of spatial pooling is to combine neighbouring descriptors into a single descriptor for a given region (local or global). The resulting combined vector must be as discriminant as possible; in other words, it must retain relevant information while discarding irrelevant and confusing details. Maximum and average are the most common aggregation functions used in the pooling step. To improve the aggregation of relevant information without degrading its discriminative power for image classification, we introduce a simple but effective scheme based on Ordered Weighted Average (OWA) aggregation operators. We present a method to learn the weights of the OWA aggregation operator in a Bag-of-Words framework and in Convolutional Neural Networks, and provide an extensive evaluation showing that OWA-based pooling outperforms classical aggregation operators.

Publication (Open Access): Co-occurrence of deep convolutional features for image search (Elsevier, 2020)
Forcén Carvalho, Juan Ignacio; Pagola Barrio, Miguel; Barrenechea Tartas, Edurne; Bustince Sola, Humberto
Estatistika, Informatika eta Matematika; Institute of Smart Cities - ISC; Estadística, Informática y Matemáticas
Image search can be tackled using deep features from pre-trained Convolutional Neural Networks (CNN). The feature map from the last convolutional layer of a CNN encodes descriptive information from which a discriminative global descriptor can be obtained. We propose a new representation of co-occurrences from deep convolutional features to extract additional relevant information from this last convolutional layer. Combining this co-occurrence map with the feature map, we achieve an improved image representation. We present two different methods to obtain the co-occurrence representation: the first based on direct aggregation of activations, and the second based on a trainable co-occurrence representation. The image descriptors derived from our methodology improve performance on well-known image retrieval datasets, as we show in the experiments.
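The first publication above describes pooling in which the activations of each pooling window are sorted and then combined with a learned weight vector (an OWA operator). The PyTorch sketch below is only a minimal illustration of that general idea, not the authors' released implementation; the module name OWAPool2d and the softmax parameterisation of the weights are assumptions made here to keep the weights non-negative and summing to one.

```python
# Minimal, illustrative OWA pooling layer (a sketch, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class OWAPool2d(nn.Module):
    def __init__(self, kernel_size=2, stride=None):
        super().__init__()
        self.k = kernel_size
        self.stride = stride or kernel_size
        # One learnable logit per position of the sorted window; logits of zero
        # give uniform weights, i.e. the layer starts out as average pooling.
        self.logits = nn.Parameter(torch.zeros(kernel_size * kernel_size))

    def forward(self, x):
        # x: (N, C, H, W) -> unfold into pooling windows of k*k values each.
        n, c, h, w = x.shape
        patches = F.unfold(x, self.k, stride=self.stride)      # (N, C*k*k, L)
        patches = patches.view(n, c, self.k * self.k, -1)      # (N, C, k*k, L)
        # Sort each window's activations in descending order.
        sorted_vals, _ = patches.sort(dim=2, descending=True)
        # OWA: weighted sum of the sorted values, weights on the simplex.
        weights = torch.softmax(self.logits, dim=0).view(1, 1, -1, 1)
        pooled = (sorted_vals * weights).sum(dim=2)             # (N, C, L)
        out_h = (h - self.k) // self.stride + 1
        out_w = (w - self.k) // self.stride + 1
        return pooled.view(n, c, out_h, out_w)


# Weights (1, 0, ..., 0) recover max pooling; uniform weights recover average pooling.
pool = OWAPool2d(kernel_size=2)
y = pool(torch.randn(8, 64, 32, 32))   # -> (8, 64, 16, 16)
```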
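The second publication combines a co-occurrence map of channel activations with the usual pooled feature map from the last convolutional layer. The sketch below only illustrates the "direct aggregation of activations" idea in the simplest possible way, using a Gram-style channel co-activation matrix; the aggregation, the L2 normalisation and the function name cooccurrence_descriptor are illustrative assumptions, not the descriptor defined in the paper.

```python
# Rough sketch: channel co-occurrence map combined with a pooled descriptor.
import torch


def cooccurrence_descriptor(feat):
    # feat: (C, H, W) activations from the last convolutional layer.
    c, h, w = feat.shape
    x = feat.reshape(c, h * w)                      # one activation vector per spatial location
    cooc = x @ x.t() / (h * w)                      # (C, C) channel co-activation map
    pooled = x.mean(dim=1)                          # (C,) average-pooled feature map
    desc = torch.cat([pooled, cooc.flatten()])      # combined global descriptor
    return desc / desc.norm().clamp_min(1e-12)      # L2-normalise for retrieval


desc = cooccurrence_descriptor(torch.relu(torch.randn(512, 7, 7)))
print(desc.shape)   # torch.Size([262656]) = 512 + 512*512
```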