Bustince Sola, Humberto

Last Name: Bustince Sola

First Name: Humberto

Department: Estadística, Informática y Matemáticas

Institute: ISC. Institute of Smart Cities

Search Results

Now showing 1 - 10 of 205
  • Publication (Open Access)
    N-dimensional admissibly ordered interval-valued overlap functions and its influence in interval-valued fuzzy rule-based classification systems
    (IEEE, 2021) Da Cruz Asmus, Tiago; Sanz Delgado, José Antonio; Pereira Dimuro, Graçaliz; Bedregal, Benjamin; Fernández Fernández, Francisco Javier; Bustince Sola, Humberto; Estatistika, Informatika eta Matematika; Institute of Smart Cities - ISC; Estadística, Informática y Matemáticas
    Overlap functions are a class of aggregation functions that are not required to be associative, generally used to indicate the degree of overlap between two values. They have been successfully used as conjunction operators in several practical problems, such as fuzzy rule-based classification systems (FRBCSs) and image processing. Some extensions of overlap functions were recently proposed, such as general overlap functions and, in the interval-valued context, n-dimensional interval-valued overlap functions. The latter can be applied to n-dimensional problems with interval-valued inputs, like interval-valued classification problems, where one can apply interval-valued FRBCSs (IV-FRBCSs). In this case, the choice of an appropriate total order for intervals, like an admissible order, can play an important role. However, neither the relationship between the interval order and the n-dimensional interval-valued overlap function (which may or may not be increasing for that order) nor the impact of this relationship on the classification process has been studied in the literature. Moreover, there is no clearly preferred n-dimensional interval-valued overlap function to be applied in an IV-FRBCS. Hence, in this paper we: (i) present some new results on admissible orders, which allow us to introduce the concept of n-dimensional admissibly ordered interval-valued overlap functions, that is, n-dimensional interval-valued overlap functions that are increasing with respect to an admissible order; (ii) develop a width-preserving construction method for this kind of function, derived from an admissible order and an n-dimensional overlap function, discussing some of its features; (iii) analyze the behaviour of several combinations of admissible orders and n-dimensional (admissibly ordered) interval-valued overlap functions when applied in IV-FRBCSs. All in all, the contribution of this paper resides in pointing out the effect of admissible orders and n-dimensional admissibly ordered interval-valued overlap functions from both theoretical and applied points of view, the latter in the context of classification problems.
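A minimal sketch of the two objects the abstract above combines, assuming the product as the n-dimensional overlap function and the Xu-Yager order as the admissible order; the names below are illustrative and not taken from the paper.

```python
# Sketch: a product-based n-dimensional overlap function and the Xu-Yager
# admissible order on closed subintervals of [0, 1].
from math import prod

def nd_overlap_product(*xs):
    """Product-based n-dimensional overlap function on [0, 1]^n."""
    assert all(0.0 <= x <= 1.0 for x in xs)
    return prod(xs)

def xu_yager_key(interval):
    """Sort key realizing the Xu-Yager admissible order:
    [a, b] <= [c, d] iff a + b < c + d, or a + b == c + d and b - a <= d - c."""
    a, b = interval
    return (a + b, b - a)

intervals = [(0.2, 0.6), (0.3, 0.5), (0.1, 0.9)]
print(sorted(intervals, key=xu_yager_key))   # [(0.3, 0.5), (0.2, 0.6), (0.1, 0.9)]
print(nd_overlap_product(0.4, 0.7, 0.9))     # ~0.252
```

Any admissible order is total, so the sort above never stalls on incomparable intervals, which is exactly the property the IV-FRBCS pipeline relies on.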
  • Publication (Open Access)
    A supervised fuzzy measure learning algorithm for combining classifiers
    (Elsevier, 2023) Uriz Martín, Mikel Xabier; Paternain Dallo, Daniel; Bustince Sola, Humberto; Galar Idoate, Mikel; Institute of Smart Cities - ISC; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa
    Fuzzy measure-based aggregations allow taking interactions among coalitions of the input sources into account. Their main drawback when applying them in real-world problems, such as combining classifier ensembles, is how to define the fuzzy measure that governs the aggregation and specifies the interactions. Nevertheless, their use for combining classifiers has proven advantageous. The learning of the fuzzy measure can be done either in a supervised or an unsupervised manner. This paper focuses on supervised approaches. Existing supervised approaches are designed to minimize the mean squared error cost function, even for classification problems. We propose a new fuzzy measure learning algorithm for combining classifiers that can optimize any cost function. To do so, advancements from deep learning frameworks, such as automatic gradient computation, are considered. Therefore, a gradient-based method is presented together with three new update policies that are required to preserve the monotonicity constraints of the fuzzy measures. The usefulness of the proposal and the optimization of the cross-entropy cost are shown in an extensive experimental study with 58 datasets corresponding to both binary and multi-class classification problems. In this framework, the proposed method is compared with other state-of-the-art methods for fuzzy measure learning.
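A minimal sketch of the building blocks referred to above, assuming a fuzzy measure stored per coalition (as a `frozenset`): the discrete Choquet integral used to fuse classifier scores, plus a naive clipping pass that restores monotonicity after an unconstrained gradient step. The clipping policy is an illustrative stand-in, not one of the paper's three update policies.

```python
from itertools import combinations

def choquet(x, mu):
    """Discrete Choquet integral of x = {source: score} w.r.t. fuzzy measure mu."""
    items = sorted(x.items(), key=lambda kv: kv[1])           # ascending scores
    prev = total = 0.0
    for i, (_, value) in enumerate(items):
        tail = frozenset(k for k, _ in items[i:])             # sources scoring >= value
        total += (value - prev) * mu[tail]
        prev = value
    return total

def clip_to_monotone(mu, sources):
    """Enforce mu(A) <= mu(B) for A subset of B, sweeping coalitions smallest-first."""
    for r in range(1, len(sources) + 1):
        for A in map(frozenset, combinations(sources, r)):
            floor = max((mu.get(frozenset(B), 0.0) for B in combinations(A, r - 1)),
                        default=0.0)
            mu[A] = min(max(mu[A], floor), 1.0)
    return mu

mu = {frozenset(): 0.0, frozenset({"clf1"}): 0.4,
      frozenset({"clf2"}): 0.5, frozenset({"clf1", "clf2"}): 1.0}
print(choquet({"clf1": 0.9, "clf2": 0.6}, mu))                # 0.6*1.0 + 0.3*0.4 = ~0.72
mu[frozenset({"clf1", "clf2"})] = 0.3                         # broken by a gradient step
print(clip_to_monotone(mu, ["clf1", "clf2"])[frozenset({"clf1", "clf2"})])  # 0.5
```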
  • Publication (Open Access)
    CFM-BD: a distributed rule induction algorithm for building compact fuzzy models in Big Data classification problems
    (IEEE, 2020) Elkano Ilintxeta, Mikel; Sanz Delgado, José Antonio; Barrenechea Tartas, Edurne; Bustince Sola, Humberto; Galar Idoate, Mikel; Estatistika, Informatika eta Matematika; Institute of Smart Cities - ISC; Estadística, Informática y Matemáticas
    Interpretability has always been a major concern for fuzzy rule-based classifiers. The use of human-readable models allows them to explain the reasoning behind their predictions and decisions. However, when it comes to Big Data classification problems, fuzzy rule-based classifiers have not been able to maintain the good tradeoff between accuracy and interpretability that has characterized these techniques in non-Big-Data environments. The most accurate methods build models composed of a large number of rules and fuzzy sets that are too complex, while those approaches focusing on interpretability do not provide state-of-the-art discrimination capabilities. In this paper, we propose a new distributed learning algorithm named CFM-BD to construct accurate and compact fuzzy rule-based classification systems for Big Data. This method has been specifically designed from scratch for Big Data problems and does not adapt or extend any existing algorithm. The proposed learning process consists of three stages: preprocessing based on the probability integral transform theorem; rule induction inspired by the CHI-BD and Apriori algorithms; and rule selection by means of a global evolutionary optimization. We conducted a complete empirical study to test the performance of our approach in terms of accuracy, complexity, and runtime. The results obtained were compared and contrasted with four state-of-the-art fuzzy classifiers for Big Data (FBDT, FMDT, Chi-Spark-RS, and CHI-BD). According to this study, CFM-BD is able to provide competitive discrimination capabilities using significantly simpler models composed of a few rules with fewer than three antecedents, employing five linguistic labels for all variables.
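A minimal, single-machine sketch of the idea behind the first stage described above: pushing each attribute through its empirical CDF (probability integral transform) so that a uniform partition into linguistic labels covers the transformed values evenly. This is an illustration of the principle only, not the distributed implementation from the paper.

```python
import numpy as np

def empirical_cdf_transform(column):
    """Map raw attribute values to approximately uniform scores in (0, 1)."""
    ranks = np.argsort(np.argsort(column))        # 0 .. n-1
    return (ranks + 0.5) / len(column)

raw = np.array([3.1, 0.2, 7.8, 0.3, 5.5])
print(empirical_cdf_transform(raw))               # [0.5 0.1 0.9 0.3 0.7]
```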
  • Publication (Open Access)
    The null space of fuzzy inclusion measures
    (IEEE, 2019) Couso, Inés; Bustince Sola, Humberto; Fernández Fernández, Francisco Javier; Sánchez, Luciano; Estatistika, Informatika eta Matematika; Institute of Smart Cities - ISC; Estadística, Informática y Matemáticas
    Some formal relationships between the different axiomatic definitions of inclusion measure are analysed. In particular, the links between the different proposals about the null space (the collection of pairs associated with a null degree of inclusion) are studied. Taking as a starting point the well-known axiomatics of Kitainik and Sinha-Dougherty, we observe that other alternative proposals about the null space are incompatible with both the null-space and the decomposition axioms of these authors. We also conclude that both the axiomatics of Kitainik and that of Sinha-Dougherty contain certain redundancies. Reduced equivalent lists of axioms are proposed.
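A minimal sketch of the kind of object analysed above: an inclusion measure built from the Łukasiewicz implication, whose null space (pairs with inclusion degree 0) contains exactly the pairs where some element fully belongs to A and not at all to B. This is a textbook example, not one singled out by the paper.

```python
def lukasiewicz_inclusion(A, B):
    """Degree to which fuzzy set A (dict: element -> membership) is included in B."""
    return min(min(1.0, 1.0 - A[x] + B.get(x, 0.0)) for x in A)

A = {"a": 1.0, "b": 0.4}
B = {"a": 0.0, "b": 0.9}
print(lukasiewicz_inclusion(A, B))   # 0.0, so (A, B) lies in the null space
print(lukasiewicz_inclusion(B, A))   # 0.5
```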
  • Publication (Open Access)
    A fusion method for multi-valued data
    (Elsevier, 2021) Papčo, Martin; Rodríguez Martínez, Iosu; Fumanal Idocin, Javier; Altalhi, A. H.; Bustince Sola, Humberto; Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika
    In this paper we propose an extension of the notion of deviation-based aggregation function tailored to aggregate multidimensional data. Our objective is both to improve the results obtained by other methods that try to select the best aggregation function for a particular set of data, such as penalty functions, and to reduce the temporal complexity required by such approaches. We discuss how this notion can be defined and present three illustrative examples of the applicability of our new proposal in areas where temporal constraints can be strict, such as image processing, deep learning and decision making, obtaining favourable results in the process.
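A minimal sketch of a scalar deviation-based aggregation, the notion the paper above extends to multidimensional data: the fused value is the point y at which the summed deviations sum_i d(x_i, y) change sign, where d is increasing in y and d(x, x) = 0. With d(x, y) = y - x this recovers the arithmetic mean; the paper's multi-valued extension is not reproduced here.

```python
def deviation_aggregate(xs, d, lo=0.0, hi=1.0, iters=60):
    """Bisection on y -> sum_i d(x_i, y), assumed increasing in y on [lo, hi]."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if sum(d(x, mid) for x in xs) < 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

xs = [0.2, 0.4, 0.9]
print(deviation_aggregate(xs, lambda x, y: y - x))           # arithmetic mean, 0.5
print(deviation_aggregate(xs, lambda x, y: (y - x) ** 3))    # a different consensus value
```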
  • Publication (Open Access)
    Funções de agregação baseadas em integral de Choquet aplicadas em redimensionalização de imagens [Aggregation functions based on the Choquet integral applied to image resizing]
    (Universidade Passo Fundo, 2019) Bueno, Jéssica C. S.; Dias, Camila A.; Pereira Dimuro, Graçaliz; Borges, Eduardo N.; Botelho, Silvia S. C.; Mattos, Viviane L. D. de; Bustince Sola, Humberto; Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika
    The increasing data volume, coupled with the high complexity of these data, has generated the need to develop knowledge extraction techniques that are increasingly efficient in both computational cost and precision. Most of the problems addressed by these techniques involve complex information that must be identified. For this purpose, machine learning methods are used, and these methods rely on a variety of functions within the different steps of their architectures. One of these steps consists of using aggregation functions to resize images. In this context, a study of aggregation functions based on the Choquet integral is presented; the main feature of the Choquet integral, in comparison with other aggregation functions, is that it considers, through the fuzzy measure, the interaction between the elements to be aggregated. Thus, an evaluation of the performance of the standard and copula-based Choquet integral functions, compared against the maximum and average functions, is presented, looking for results that improve on the usually applied aggregation functions. The results of such comparisons are promising when evaluated through image quality measures.
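A minimal sketch of the comparison evaluated above: aggregating a 2x2 block of normalized pixel intensities with a Choquet integral under a symmetric ("cardinality-based") fuzzy measure mu(A) = (|A|/n)^q, versus the plain maximum and average. The symmetric measure is an illustrative choice, not the copula-based one studied in the paper.

```python
def choquet_symmetric(values, q=2.0):
    """Choquet integral w.r.t. the symmetric measure mu(A) = (|A|/n)**q."""
    xs, n = sorted(values), len(values)
    total, prev = 0.0, 0.0
    for i, x in enumerate(xs):
        tail_size = n - i                       # number of elements with value >= x
        total += (x - prev) * (tail_size / n) ** q
        prev = x
    return total

block = [0.1, 0.5, 0.6, 0.8]                    # intensities of a 2x2 window
print(choquet_symmetric(block), max(block), sum(block) / len(block))
# 0.3625 (Choquet) vs 0.8 (max) vs 0.5 (average)
```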
  • Publication (Open Access)
    A survey of fingerprint classification Part I: taxonomies on feature extraction methods and learning models
    (Elsevier, 2015) Galar Idoate, Mikel; Derrac, Joaquín; Peralta, Daniel; Triguero, Isaac; Paternain Dallo, Daniel; López Molina, Carlos; García, Salvador; Benítez, José Manuel; Pagola Barrio, Miguel; Barrenechea Tartas, Edurne; Bustince Sola, Humberto; Herrera, Francisco; Automática y Computación; Automatika eta Konputazioa
    This paper reviews the fingerprint classification literature, looking at the problem from a double perspective. We first deal with feature extraction methods, including the different models considered for singular point detection and for orientation map extraction. Then, we focus on the different learning models considered to build the classifiers used to label new fingerprints. Taxonomies and classifications for the feature extraction, singular point detection, orientation extraction and learning methods are presented. A critical view of the existing literature has led us to present a discussion on the existing methods and their drawbacks, such as the difficulty of reimplementing them, lack of details, or major differences in their evaluation procedures. On this account, an experimental analysis of the most relevant methods is carried out in the second part of this paper, and a new method based on their combination is presented.
  • Publication (Open Access)
    Mixture functions and their monotonicity
    (Elsevier, 2019) Špirková, Jana; Beliakov, Gleb; Bustince Sola, Humberto; Fernández Fernández, Francisco Javier; Estatistika, Informatika eta Matematika; Institute of Smart Cities - ISC; Estadística, Informática y Matemáticas
    We consider mixture functions, which are a type of weighted average for which the corresponding weights are calculated by means of appropriate continuous functions of their inputs. In general, these mixture functions need not be monotone increasing. For this reason we study sufficient conditions to ensure standard, weak and directional monotonicity for specific types of weighting functions. We also analyze directional monotonicity when differentiability is assumed.
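A minimal sketch of a mixture function as defined above: M(x_1, ..., x_n) = sum_i w(x_i) x_i / sum_j w(x_j), where the weighting function w is continuous. The weighting functions below are simple illustrative choices, not the specific families analysed in the paper.

```python
def mixture(xs, w):
    """Mixture function: weighted average with input-dependent weights w(x_i)."""
    weights = [w(x) for x in xs]
    return sum(wi * xi for wi, xi in zip(weights, xs)) / sum(weights)

xs = [0.2, 0.6, 0.9]
print(mixture(xs, lambda x: 1.0))        # constant weights: the arithmetic mean
print(mixture(xs, lambda x: x + 0.1))    # larger inputs receive larger weights
```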
  • Publication (Open Access)
    Learning ordered pooling weights in image classification
    (Elsevier, 2020) Forcén Carvalho, Juan Ignacio; Pagola Barrio, Miguel; Barrenechea Tartas, Edurne; Bustince Sola, Humberto; Estadística, Informática y Matemáticas; Estatistika, Informatika eta Matematika; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa
    Spatial pooling is an important step in computer vision systems like Convolutional Neural Networks or the Bag-of-Words method. The purpose of spatial pooling is to combine neighbouring descriptors to obtain a single descriptor for a given region (local or global). The resulting combined vector must be as discriminant as possible; in other words, it must retain relevant information while removing irrelevant and confusing details. Maximum and average are the most common aggregation functions used in the pooling step. To improve the aggregation of relevant information without degrading its discriminative power for image classification, we introduce a simple but effective scheme based on Ordered Weighted Average (OWA) aggregation operators. We present a method to learn the weights of the OWA aggregation operator in a Bag-of-Words framework and in Convolutional Neural Networks, and provide an extensive evaluation showing that OWA-based pooling outperforms classical aggregation operators.
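A minimal sketch of OWA-based pooling as described above: the activations of a region are sorted in descending order and combined with a weight vector, so max pooling and average pooling appear as two particular weight choices. The normalization used here is an assumption, not necessarily the paper's exact parametrisation of the learned weights.

```python
import numpy as np

def owa_pool(activations, weights):
    """OWA aggregation: weights apply to the inputs sorted in descending order."""
    x = np.sort(np.asarray(activations, dtype=float))[::-1]
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w / w.sum(), x))

region = [0.1, 0.9, 0.4, 0.7]
print(owa_pool(region, [1, 0, 0, 0]))         # max pooling: 0.9
print(owa_pool(region, [1, 1, 1, 1]))         # average pooling: 0.525
print(owa_pool(region, [0.6, 0.3, 0.1, 0.0])) # learned weights: intermediate behaviour
```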
  • Publication (Open Access)
    The interval-valued Choquet integral based on admissible permutations
    (IEEE, 2018) Paternain Dallo, Daniel; Miguel Turullols, Laura de; Ochoa Lezaun, Gustavo; Lizasoain Iriso, María Inmaculada; Mesiar, Radko; Bustince Sola, Humberto; Estatistika, Informatika eta Matematika; Institute of Smart Cities - ISC; Institute for Advanced Materials and Mathematics - INAMAT2; Estadística, Informática y Matemáticas; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa
    Aggregation or fusion of interval data is not a trivial task, since the necessity of arranging the data arises in many aggregation functions, such as OWA operators or the Choquet integral. Some arranging procedures have been proposed to solve this problem, but they need certain parameters to be set. To avoid this parameter dependence, in this work we propose the concept of an admissible permutation of intervals. Based on this concept, which avoids any parameter selection, we propose a new approach for the interval-valued Choquet integral that takes into account every possible permutation fitting the considered ordinal structure of the data. Finally, a consensus among all the permutations is constructed.
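A minimal sketch of the idea described above: enumerate the permutations of interval-valued inputs that respect the componentwise partial order (the admissible permutations), evaluate an endpoint-wise Choquet-type expression for each of them, and average the results as a simple consensus. The symmetric fuzzy measure and the averaging consensus are illustrative assumptions, not the paper's exact construction.

```python
from itertools import permutations

def strictly_below(u, v):
    """Componentwise order on intervals: [a, b] <= [c, d] iff a <= c and b <= d."""
    return u[0] <= v[0] and u[1] <= v[1] and u != v

def admissible_permutations(intervals):
    """Permutations (linear extensions) compatible with the componentwise order."""
    n = len(intervals)
    for p in permutations(range(n)):
        if not any(strictly_below(intervals[p[j]], intervals[p[i]])
                   for i in range(n) for j in range(i + 1, n)):
            yield p

def choquet_for_permutation(intervals, p, q=2.0):
    """Endpoint-wise Choquet-type expression using the arrangement given by p."""
    n, lo, hi, prev = len(intervals), 0.0, 0.0, (0.0, 0.0)
    for i, idx in enumerate(p):
        a, b = intervals[idx]
        mu = ((n - i) / n) ** q                  # symmetric measure of the remaining coalition
        lo, hi, prev = lo + (a - prev[0]) * mu, hi + (b - prev[1]) * mu, (a, b)
    return lo, hi

data = [(0.2, 0.5), (0.4, 0.6), (0.1, 0.8)]      # (0.1, 0.8) is incomparable with the others
results = [choquet_for_permutation(data, p) for p in admissible_permutations(data)]
consensus = tuple(sum(c) / len(results) for c in zip(*results))
print(len(results), consensus)                   # 3 admissible permutations and their consensus
```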