Person:
Villanueva Larre, Arantxa

Last Name: Villanueva Larre
First Name: Arantxa
ORCID: 0000-0001-9822-2530
UPNA: 2247

Search Results

Now showing 1 - 10 of 10
  • Publication (Open Access)
    SeTA: semiautomatic tool for annotation of eye tracking images
    (ACM, 2019) Larumbe Bergera, Andoni; Porta Cuéllar, Sonia; Cabeza Laguna, Rafael; Villanueva Larre, Arantxa; Ingeniería Eléctrica, Electrónica y de Comunicación; Ingeniaritza Elektrikoa, Elektronikoaren eta Telekomunikazio Ingeniaritzaren
    Availability of large-scale tagged datasets is a must in the field of deep learning applied to the eye tracking challenge. In this paper, the potential of the Supervised Descent Method (SDM) as a semiautomatic labelling tool for eye tracking images is shown. The objective of the paper is to demonstrate how the human effort needed to manually label large eye tracking datasets can be radically reduced by the use of cascaded regressors. Different applications are provided in the fields of high- and low-resolution systems. Iris/pupil center labelling is shown as an example for low-resolution images, while pupil contour point detection is demonstrated in high resolution. In both cases, manual annotation requirements are drastically reduced. (An illustrative sketch of this kind of cascaded regression follows the publication list.)
  • Publication (Open Access)
    Synthetic gaze data augmentation for improved user calibration
    (Springer, 2021) Garde Lecumberri, Gonzalo; Larumbe Bergera, Andoni; Porta Cuéllar, Sonia; Cabeza Laguna, Rafael; Villanueva Larre, Arantxa; Ingeniaritza Elektrikoa, Elektronikoaren eta Telekomunikazio Ingeniaritzaren; Institute of Smart Cities - ISC; Ingeniería Eléctrica, Electrónica y de Comunicación
    In this paper, we focus on the calibration possibilities of a deep learning-based gaze estimation process applying transfer learning, comparing its performance when the pretrained model uses a general dataset versus a gaze-specific dataset. Subject calibration has been demonstrated to improve gaze accuracy in high-performance eye trackers. Hence, we explore the potential of a deep learning gaze estimation model for subject calibration employing fine-tuning procedures. A pretrained ResNet-18 network, which performs well in many computer vision tasks, is fine-tuned using user-specific data in a few-shot adaptive gaze estimation approach. We study the impact of pretraining a model with a synthetic dataset, U2Eyes, before addressing gaze estimation calibration on a real dataset, I2Head. The results of the work show that the success of individual calibration largely depends on the balance between fine-tuning and the standard supervised learning procedures, and that using a gaze-specific dataset to pretrain the model improves accuracy when few images are available for calibration. This paper shows that calibration is feasible in low-resolution scenarios, providing outstanding accuracies below 1.5° of error. (A minimal fine-tuning sketch of this kind appears after the publication list.)
  • Publication (Open Access)
    Fast and robust ellipse detection algorithm for head-mounted eye tracking systems
    (Springer, 2018) Martinikorena Aranburu, Ion; Cabeza Laguna, Rafael; Villanueva Larre, Arantxa; Urtasun, Iñaki; Larumbe Bergera, Andoni; Ingeniería Eléctrica, Electrónica y de Comunicación; Ingeniaritza Elektrikoa, Elektronikoaren eta Telekomunikazio Ingeniaritzaren
    In head-mounted eye tracking systems, the correct detection of pupil position is a key factor in estimating gaze direction. However, this is a challenging issue when the videos are recorded in real-world conditions, due to the many sources of noise and artifacts that exist in these scenarios, such as rapid changes in illumination, reflections, occlusions and an elliptical appearance of the pupil. Thus, it is an indispensable prerequisite that a pupil detection algorithm be robust under these challenging conditions. In this work, we present a pupil center detection method based on searching for the point of maximum contribution to the radial symmetry of the image. Additionally, two different center refinement steps were incorporated with the aim of adapting the algorithm to images with highly elliptical pupil appearances. The performance of the proposed algorithm is evaluated using a dataset consisting of 225,569 annotated head-mounted eye images from publicly available sources. The results are compared with the best algorithms reported in the literature, and our algorithm is shown to be superior. (A toy radial-symmetry voting sketch follows the publication list.)
  • Publication (Open Access)
    Supervised descent method (SDM) applied to accurate pupil detection in off-the-shelf eye tracking systems
    (ACM, 2018) Larumbe Bergera, Andoni; Cabeza Laguna, Rafael; Villanueva Larre, Arantxa; Ingeniería Eléctrica, Electrónica y de Comunicación; Ingeniaritza Elektrikoa, Elektronikoaren eta Telekomunikazio Ingeniaritzaren
    The precise detection of the pupil/iris center is key to estimating gaze accurately. This becomes especially challenging in low-cost frameworks, in which the algorithms employed for high-performance systems fail. In recent years, considerable effort has been made to apply training-based methods to low-resolution images. In this paper, the Supervised Descent Method (SDM) is applied to the GI4E database. The 2D landmarks employed for training are the corners of the eyes and the pupil centers. In order to validate the proposed algorithm, a cross-validation procedure is performed. The strategy employed for training allows us to affirm that our method can potentially outperform state-of-the-art algorithms applied to the same dataset in terms of 2D accuracy. The promising results encourage further study of training-based methods for eye tracking.
  • Publication (Open Access)
    Low cost gaze estimation: knowledge-based solutions
    (IEEE, 2020) Martinikorena Aranburu, Ion; Larumbe Bergera, Andoni; Ariz Galilea, Mikel; Porta Cuéllar, Sonia; Cabeza Laguna, Rafael; Villanueva Larre, Arantxa; Ingeniería Eléctrica, Electrónica y de Comunicación; Ingeniaritza Elektrikoa, Elektronikoaren eta Telekomunikazio Ingeniaritzaren
    Eye tracking technology in low-resolution scenarios is not a completely solved issue to date. The possibility of using eye tracking in a mobile gadget is a challenging objective that would allow this technology to spread to unexplored fields. In this paper, a knowledge-based approach is presented to solve gaze estimation in low-resolution settings. Understanding the high-resolution paradigm makes it possible to propose alternative models to solve gaze estimation. In this manner, three models are presented: a geometrical model, an interpolation model and a compound model, as solutions for gaze estimation in remote low-resolution systems. Since this work considers head position essential to improve gaze accuracy, a method for head pose estimation is also proposed. The methods are validated in an optimal framework, the I2Head database, which combines head and gaze data. The experimental validation of the models demonstrates their sensitivity to image processing inaccuracies, which is critical in the case of the geometrical model. Static and extreme-movement scenarios are analyzed, showing the higher robustness of the compound and geometrical models in the presence of the user's displacement. Accuracy values of about 3° have been obtained, increasing to values close to 5° in extreme displacement settings, results fully comparable with the state of the art.
  • Publication (Open Access)
    Accurate pupil center detection in off-the-shelf eye tracking systems using convolutional neural networks
    (MDPI, 2021) Larumbe Bergera, Andoni; Garde Lecumberri, Gonzalo; Porta Cuéllar, Sonia; Cabeza Laguna, Rafael; Villanueva Larre, Arantxa; Ingeniería Eléctrica, Electrónica y de Comunicación; Ingeniaritza Elektrikoa, Elektronikoaren eta Telekomunikazio Ingeniaritzaren; Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa
    Remote eye tracking technology has experienced increasing growth in recent years due to its applicability in many research areas. In this paper, a video-oculography method based on convolutional neural networks (CNNs) for pupil center detection over webcam images is proposed. As the first contribution of this work, and in order to train the model, a manual pupil center labeling procedure was performed on a facial landmark dataset. The model has been tested over both real and synthetic databases and outperforms state-of-the-art methods, achieving pupil center estimation errors below the size of a constricted pupil in more than 95% of the images, while reducing computing time by a factor of 8. The results show the importance of using high-quality training data and well-known architectures to achieve outstanding performance.
  • Publication (Open Access)
    Improved strategies for HPE employing learning-by-synthesis approaches
    (IEEE, 2018) Larumbe Bergera, Andoni; Ariz Galilea, Mikel; Bengoechea Irañeta, José Javier; Segura, Rubén; Cabeza Laguna, Rafael; Villanueva Larre, Arantxa; Ingeniería Eléctrica, Electrónica y de Comunicación; Ingeniaritza Elektrikoa, Elektronikoaren eta Telekomunikazio Ingeniaritzaren
    The first contribution of this paper is the presentation of a synthetic video database where the ground truth of 2D facial landmarks and 3D head poses is available to be used for training and evaluating Head Pose Estimation (HPE) methods. The database is publicly available and contains videos of users performing guided and natural movements. The second and main contribution is the proposal of a hybrid method for HPE based on Pose from Orthography and Scaling with Iterations (POSIT). The 2D landmark detection is performed using a Random Cascaded-Regression Copse (R-CR-C). For the training stage, we use state-of-the-art labeled databases. A learning-by-synthesis approach has also been used to augment the size of the training data employing the synthetic database. HPE accuracy is tested using two 3D head models from the literature. The proposed tracking method has been compared with state-of-the-art methods based on Supervised Descent Regressors (SDR) in terms of accuracy, achieving an improvement of 60%. (A generic 2D-to-3D pose sketch follows the publication list.)
  • Publication (Open Access)
    U2Eyes: a binocular dataset for eye tracking and gaze estimation
    (IEEE, 2019) Porta Cuéllar, Sonia; Bossavit, Benoît; Cabeza Laguna, Rafael; Larumbe Bergera, Andoni; Garde Lecumberri, Gonzalo; Villanueva Larre, Arantxa; Ingeniería Eléctrica, Electrónica y de Comunicación; Ingeniaritza Elektrikoa, Elektronikoaren eta Telekomunikazio Ingeniaritzaren
    Theory shows that a huge amount of labelled data is needed in order to achieve reliable classification/regression methods when using deep/machine learning techniques. However, in the eye tracking field, manual annotation is not a feasible option due to the wide variability to be covered. Hence, techniques devoted to synthesizing images emerge as an opportunity to provide vast amounts of annotated data. Considering that the well-known UnityEyes tool provides a framework to generate single-eye images, and taking into account that information from both eyes can contribute to improving gaze estimation accuracy, we present the publicly available U2Eyes dataset. It comprises about 6 million synthetic images containing binocular data. Furthermore, the physiology of the eye model employed is improved, simplified dynamics of binocular vision are incorporated, and more detailed 2D and 3D labelled data are provided. Additionally, an example application of the dataset is shown as work in progress. Employing U2Eyes as a training framework, the Supervised Descent Method (SDM) is used for eyelid segmentation. The model obtained as a result of the training process is then applied to real images from the GI4E dataset, showing promising results.
  • Publication (Open Access)
    Low-cost eye tracking calibration: a knowledge-based study
    (MDPI, 2021) Garde Lecumberri, Gonzalo; Larumbe Bergera, Andoni; Bossavit, Benoît; Porta Cuéllar, Sonia; Cabeza Laguna, Rafael; Villanueva Larre, Arantxa; Ingeniería Eléctrica, Electrónica y de Comunicación; Ingeniaritza Elektrikoa, Elektronikoaren eta Telekomunikazio Ingeniaritzaren
    Subject calibration has been demonstrated to improve accuracy in high-performance eye trackers. However, the true weight of calibration in off-the-shelf eye tracking solutions is still not addressed. In this work, a theoretical framework to measure the effects of calibration in deep learning-based gaze estimation is proposed for low-resolution systems. To this end, features extracted from the synthetic U2Eyes dataset are used in a fully connected network in order to isolate the effect of specific user features, such as kappa angles. Then, the impact of system calibration in a real setup employing I2Head dataset images is studied. The obtained results show accuracy improvements of over 50%, proving that calibration is a key process also in low-resolution gaze estimation scenarios. Furthermore, we show that, after calibration, accuracy values close to those obtained by high-resolution systems, in the range of 0.7°, could theoretically be obtained if a careful selection of image features were performed, demonstrating significant room for improvement for off-the-shelf eye tracking systems.
  • Publication (Open Access)
    Gaze estimation problem tackled through synthetic images
    (Association for Computing Machinery (ACM), 2020) Garde Lecumberri, Gonzalo; Larumbe Bergera, Andoni; Bossavit, Benoît; Cabeza Laguna, Rafael; Porta Cuéllar, Sonia; Villanueva Larre, Arantxa; Ingeniería Eléctrica, Electrónica y de Comunicación; Ingeniaritza Elektrikoa, Elektronikoaren eta Telekomunikazio Ingeniaritzaren
    In this paper, we evaluate a synthetic framework to be used in the field of gaze estimation employing deep learning techniques. The lack of sufficient annotated data could be overcome by the use of a synthetic evaluation framework insofar as it resembles the behavior of a real scenario. In this work, we use the U2Eyes synthetic environment, employing the I2Head dataset as a real benchmark for comparison, based on alternative training and testing strategies. The results obtained show comparable average behavior between both frameworks, although the synthetic images yield significantly more robust and stable performance. Additionally, the potential of synthetically pretrained models for application in user-specific calibration strategies is shown, with outstanding performance.
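
Code Sketches

Several publications above (SeTA and the SDM pupil detection paper) apply the Supervised Descent Method (SDM) as a cascade of learned regressors that refine landmark estimates. The sketch below is a rough illustration only, not the authors' implementation: it trains one linear regressor per cascade stage on raw intensity patches, and the patch feature choice, initialization, and data handling are placeholder assumptions.

```python
# Minimal SDM-style cascade for 2D landmark refinement (toy illustration).
import numpy as np

def patch_features(image, points, half=5):
    """Concatenate small intensity patches sampled around each landmark estimate."""
    h, w = image.shape
    feats = []
    for x, y in points.reshape(-1, 2):
        xi = int(np.clip(round(x), half, w - half - 1))
        yi = int(np.clip(round(y), half, h - half - 1))
        feats.append(image[yi - half:yi + half + 1, xi - half:xi + half + 1].ravel())
    return np.concatenate(feats)

def train_sdm(images, gt_landmarks, init_landmarks, n_stages=4):
    """Learn one linear descent map per stage: delta_x ~ R @ features + bias."""
    regressors = []
    current = [x0.copy() for x0 in init_landmarks]
    for _ in range(n_stages):
        Phi = np.stack([patch_features(im, x) for im, x in zip(images, current)])
        Delta = np.stack([(gt - x).ravel() for gt, x in zip(gt_landmarks, current)])
        Phi1 = np.hstack([Phi, np.ones((len(Phi), 1))])       # add bias column
        R, *_ = np.linalg.lstsq(Phi1, Delta, rcond=None)      # least-squares descent map
        regressors.append(R)
        current = [x + (patch_features(im, x) @ R[:-1] + R[-1]).reshape(x.shape)
                   for im, x in zip(images, current)]
    return regressors

def apply_sdm(image, x0, regressors):
    """Refine an initial landmark guess with the learned cascade."""
    x = x0.copy()
    for R in regressors:
        x = x + (patch_features(image, x) @ R[:-1] + R[-1]).reshape(x.shape)
    return x
```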
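
The calibration papers above fine-tune a pretrained ResNet-18 on a handful of user-specific samples. The following PyTorch sketch shows what such few-shot fine-tuning can look like; the ImageNet weights, input size, 2D gaze target, and optimizer settings are assumptions for illustration, not the papers' exact configuration (which additionally pretrains on U2Eyes).

```python
# Few-shot fine-tuning of a pretrained ResNet-18 as a gaze regressor (illustrative).
import torch
import torch.nn as nn
from torchvision import models

def build_gaze_model():
    # Start from ImageNet weights (assumption); replace classifier with a 2D regression head.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)
    return model

def calibrate(model, calib_images, calib_gaze, epochs=20, lr=1e-4):
    """Fine-tune on user-specific calibration data."""
    model.train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(calib_images), calib_gaze)
        loss.backward()
        optimizer.step()
    return model

# Usage sketch with random stand-in data (a real setup would use calibration frames).
model = build_gaze_model()
images = torch.randn(9, 3, 224, 224)   # e.g. nine calibration points
gaze = torch.randn(9, 2)               # hypothetical 2D gaze targets
model = calibrate(model, images, gaze)
```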
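
The head-mounted pupil detection paper searches for the point of maximum contribution to the radial symmetry of the image. The toy sketch below approximates that idea with gradient voting over a fixed set of radii under a dark-pupil assumption; the radius range, threshold, and smoothing are illustrative choices, not the published algorithm.

```python
# Simplified radial-symmetry voting for pupil center detection (toy illustration).
import cv2
import numpy as np

def pupil_center(gray, radii=range(15, 40, 5), grad_thresh=20.0):
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    ys, xs = np.where(mag > grad_thresh)                  # keep strong edges only
    ux, uy = gx[ys, xs] / mag[ys, xs], gy[ys, xs] / mag[ys, xs]
    votes = np.zeros_like(mag)
    h, w = gray.shape
    for r in radii:
        # A dark pupil has gradients pointing outwards, so step against them.
        cx = np.clip((xs - r * ux).round().astype(int), 0, w - 1)
        cy = np.clip((ys - r * uy).round().astype(int), 0, h - 1)
        np.add.at(votes, (cy, cx), mag[ys, xs])           # weighted center votes
    votes = cv2.GaussianBlur(votes, (0, 0), sigmaX=3)     # smooth the accumulator
    cyc, cxc = np.unravel_index(np.argmax(votes), votes.shape)
    return cxc, cyc

# gray = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
# print(pupil_center(gray))
```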
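
The head pose estimation paper builds on POSIT to recover 3D pose from 2D landmarks. As a stand-in for that 2D-to-3D pose step, the sketch below uses OpenCV's solvePnP with a generic rigid head model; the 3D model coordinates and camera intrinsics are illustrative assumptions, not the paper's head models.

```python
# Head pose from 2D facial landmarks via PnP (illustrative stand-in for POSIT).
import cv2
import numpy as np

# Approximate 3D landmark positions (mm) of a generic rigid head model (assumed values).
MODEL_3D = np.array([
    [0.0,    0.0,    0.0],     # nose tip
    [0.0,  -63.6,  -12.5],     # chin
    [-43.3,  32.7,  -26.0],    # left eye outer corner
    [43.3,   32.7,  -26.0],    # right eye outer corner
    [-28.9, -28.9,  -24.1],    # left mouth corner
    [28.9,  -28.9,  -24.1],    # right mouth corner
], dtype=np.float64)

def head_pose(landmarks_2d, image_size):
    """Estimate rotation/translation from 2D landmarks given in the MODEL_3D order."""
    h, w = image_size
    focal = w                                             # crude focal length guess
    camera = np.array([[focal, 0, w / 2],
                       [0, focal, h / 2],
                       [0, 0, 1]], dtype=np.float64)
    dist = np.zeros(4)                                    # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(MODEL_3D, landmarks_2d.astype(np.float64),
                                  camera, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    rot, _ = cv2.Rodrigues(rvec)                          # rotation vector -> 3x3 matrix
    return rot, tvec
```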