Title: Co-occurrence of deep convolutional features for image search
Authors: Forcén Carvalho, Juan Ignacio; Pagola Barrio, Miguel; Barrenechea Tartas, Edurne; Bustince Sola, Humberto
Date issued: 2020
Date deposited: 2021-01-27
Date available: 2022-05-01
ISSN: 0262-8856
DOI: 10.1016/j.imavis.2020.103909
Handle: https://academica-e.unavarra.es/handle/2454/39071
Type: info:eu-repo/semantics/article
Access: info:eu-repo/semantics/openAccess
Format: application/pdf, 30 p.
Language: eng
Rights: © 2020 Elsevier B.V. This manuscript version is made available under the CC BY-NC-ND 4.0 license.
Keywords: Co-occurrence; Image retrieval; Feature aggregation; Pooling

Abstract: Image search can be tackled using deep features from pre-trained Convolutional Neural Networks (CNNs). The feature map from the last convolutional layer of a CNN encodes descriptive information from which a discriminative global descriptor can be obtained. We propose a new representation of co-occurrences among deep convolutional features to extract additional relevant information from this last convolutional layer. By combining this co-occurrence map with the feature map, we obtain an improved image representation. We present two methods to compute the co-occurrence representation: the first is based on direct aggregation of activations, and the second on a trainable co-occurrence representation. The image descriptors derived from our methodology improve retrieval performance on well-known image retrieval datasets, as our experiments show.
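As a rough illustration of the idea sketched in the abstract (deriving a co-occurrence map from the last convolutional layer's activations and pooling it together with the feature map into a global descriptor), the following PyTorch sketch implements a simple direct-aggregation variant. The neighbourhood definition, the function names (`cooccurrence_map`, `global_descriptor`), and the sum-pool-and-concatenate scheme are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def cooccurrence_map(feats: torch.Tensor, kernel_size: int = 3) -> torch.Tensor:
    """Illustrative co-occurrence map: weight each channel activation by the
    summed activations in its local spatial neighbourhood (a simple
    direct-aggregation variant; an assumption, not the paper's exact method).

    feats: (B, C, H, W) activations from the last convolutional layer.
    Returns a (B, C, H, W) co-occurrence map.
    """
    B, C, H, W = feats.shape
    # Depthwise box filter that sums a kernel_size x kernel_size neighbourhood.
    kernel = torch.ones(C, 1, kernel_size, kernel_size, device=feats.device)
    neighbour_sum = F.conv2d(feats, kernel, padding=kernel_size // 2, groups=C)
    # Exclude the centre activation so only co-occurring neighbours contribute.
    neighbour_sum = neighbour_sum - feats
    return feats * neighbour_sum

def global_descriptor(feats: torch.Tensor) -> torch.Tensor:
    """Sum-pool the feature map and the co-occurrence map, then concatenate
    into one L2-normalised global descriptor of size 2C."""
    cooc = cooccurrence_map(feats)
    pooled_feats = feats.sum(dim=(2, 3))  # (B, C) sum-pooled activations
    pooled_cooc = cooc.sum(dim=(2, 3))    # (B, C) sum-pooled co-occurrences
    desc = torch.cat([pooled_feats, pooled_cooc], dim=1)
    return F.normalize(desc, p=2, dim=1)

# Example: random post-ReLU activations standing in for a CNN's last
# convolutional layer output (e.g. a 512-channel, 7x7 feature map).
feats = torch.relu(torch.randn(2, 512, 7, 7))
print(global_descriptor(feats).shape)  # torch.Size([2, 1024])
```

Under the same assumptions, the trainable variant mentioned in the abstract could be approximated by replacing the fixed box kernel with a learnable depthwise convolution, e.g. `torch.nn.Conv2d(C, C, kernel_size, padding=kernel_size // 2, groups=C)`, so that the co-occurrence weighting is learned end to end with the retrieval objective.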