Morató Osés, Daniel

Last Name: Morató Osés
First Name: Daniel

Department: Ingeniería Eléctrica, Electrónica y de Comunicación
Institute: ISC. Institute of Smart Cities

Search Results

Now showing 1 - 10 of 65
  • Publication (Open Access)
    Analysis and stochastic characterization of TCP flows
    (Springer, 2000) Aracil Rico, Javier; Morató Osés, Daniel; Izal Azcárate, Mikel; Automática y Computación; Automatika eta Konputazioa
    Since most Internet services use TCP as a transport protocol, there is a growing interest in the characterization of TCP flows. However, the flow characteristics depend on a large number of factors, due to the complexity of TCP. As a result, TCP characteristics are normally studied by means of simulations or controlled network setups. In this paper we propose a TCP characterization based on a generic stochastic flow model with burstiness and throughput constraints ((σ, ρ)-constraints), which is useful for characterizing flows in ATM and other flow-switched networks. The model is obtained through extensive analysis of a real traffic trace, comprising approximately 1,500 hosts and 1,700,000 TCP connections. The results suggest that TCP connections in the wide-area Internet have low throughput, while packet bursts do not grow exponentially as the slow-start behavior would indicate. On the other hand, the impact of the connection establishment phase is striking: the throughput of the TCP flow is approximately half the throughput obtained in the data transfer phase, namely after the connection has been established.
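
    As a rough companion to the (σ, ρ)-constraint idea above, the sketch below estimates, for one flow, the smallest burst allowance σ compatible with a given drain rate ρ using a leaky-bucket computation. The input format and function name are illustrative, not taken from the paper.

    ```python
    # Minimal sketch: smallest burstiness sigma (bytes) so that a flow satisfies a
    # (sigma, rho) constraint, i.e. bytes sent in any interval [s, t] never exceed
    # sigma + rho * (t - s). This equals the maximum backlog of a virtual leaky
    # bucket drained at rate rho. Input format is hypothetical.

    def sigma_for_rate(packets, rho):
        """packets: list of (timestamp_s, payload_bytes) for one flow, time-sorted.
        rho: drain rate in bytes per second. Returns the required sigma in bytes."""
        backlog = 0.0
        sigma = 0.0
        last_t = packets[0][0] if packets else 0.0
        for t, size in packets:
            backlog = max(0.0, backlog - rho * (t - last_t))  # drain since last packet
            backlog += size
            sigma = max(sigma, backlog)
            last_t = t
        return sigma

    # Made-up example: two back-to-back 1500-byte segments, then sparser traffic
    trace = [(0.00, 1500), (0.01, 1500), (0.50, 1500), (1.00, 1500)]
    print(sigma_for_rate(trace, rho=10_000))  # sigma needed at a 10 kB/s drain rate
    ```
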
  • Publication (Open Access)
    A popularity-aware method for discovering server IP addresses related to websites
    (IEEE, 2013) Torres García, Luis Miguel; Magaña Lizarrondo, Eduardo; Izal Azcárate, Mikel; Morató Osés, Daniel; Automática y Computación; Automatika eta Konputazioa
    The complexity of web traffic has grown in the past years as websites evolve and new services are provided over the HTTP protocol. When accessing a website, multiple connections to different servers are opened and it is usually difficult to distinguish which servers are related to which sites. However, this information is useful from the perspective of security and accounting and can also help to label web traffic and use it as ground truth for traffic classification systems. In this paper we present a method to discover server IP addresses related to specific websites in a traffic trace. Our method uses NetFlow-type records which makes it scalable and impervious to encryption of packet payloads. It is, moreover, popularity-aware in the sense that it takes into consideration the differences in the number of accesses to each site in order to provide a better identification of servers. The method can be used to gather data from a group of interesting websites or, by applying it to a representative set of websites, it can label a sizeable number of connections in a packet trace.
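
    The abstract does not describe the algorithm in detail; the following is only a hedged illustration of a popularity-aware co-occurrence heuristic in that spirit, with a made-up NetFlow-like record format, hypothetical anchor IP sets per site, and a simple inverse-access-count weighting.

    ```python
    # Illustrative co-occurrence heuristic (not the paper's exact algorithm):
    # server IPs that repeatedly appear close in time to a website's known front-end
    # IP, from the same client, are credited to that site. Weighting by the inverse
    # of the site's access count keeps popular sites from dominating the scores.
    from collections import defaultdict

    def score_related_servers(records, anchors, window=5.0):
        """records: iterable of (timestamp_s, client_ip, server_ip) NetFlow-like tuples,
        sorted by timestamp. anchors: dict site_name -> set of known front-end IPs.
        Returns dict site_name -> dict server_ip -> popularity-weighted score."""
        accesses = defaultdict(int)                    # how often each site's anchors are hit
        recent = defaultdict(list)                     # client_ip -> [(t, server_ip), ...]
        scores = defaultdict(lambda: defaultdict(float))

        for t, client, server in records:
            recent[client] = [(tt, s) for tt, s in recent[client] if t - tt <= window]
            recent[client].append((t, server))
            for site, ips in anchors.items():
                if server in ips:                      # client just touched the site's anchor
                    accesses[site] += 1
                    weight = 1.0 / accesses[site]      # popularity-aware down-weighting
                    for _, other in recent[client]:
                        if other not in ips:
                            scores[site][other] += weight
        return scores
    ```
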
  • Publication (Open Access)
    Online classification of user activities using machine learning on network traffic
    (Elsevier, 2020) Labayen Guembe, Víctor; Magaña Lizarrondo, Eduardo; Morató Osés, Daniel; Izal Azcárate, Mikel; Ingeniería Eléctrica, Electrónica y de Comunicación; Ingeniaritza Elektrikoa, Elektronikoaren eta Telekomunikazio Ingeniaritzaren
    The daily deployment of new applications, along with the exponential increase in network traffic, entails a growth in the complexity of network analysis and monitoring. Conversely, the increasing availability and decreasing cost of computational capacity have increased the popularity and usability of machine learning algorithms. In this paper, a system for classifying user activities from network traffic using both supervised and unsupervised learning is proposed. The system uses the behaviour exhibited over the network and classifies the underlying user activity, taking into consideration all of the traffic generated by the user within a given time window. Those windows are characterised with features extracted from the network and transport layer headers in the traffic flows. A three-layer model is proposed to perform the classification task. The first two layers of the model are implemented using K-Means, while the last one uses a Random Forest to obtain the activity labels. An average accuracy of 97.37% is obtained, with values of precision and recall that allow online classification of network traffic for Quality of Service (QoS) and user profiling, outperforming previous proposals.
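
    A minimal sketch of such a three-layer pipeline is given below, assuming scikit-learn, per-flow header features, and particular cluster counts; the exact way the layers are chained here is an assumption based only on the abstract.

    ```python
    # Minimal sketch of a three-layer pipeline as described in the abstract:
    # K-Means over per-flow features, K-Means over per-window cluster histograms,
    # and a Random Forest that outputs the user-activity label. Feature choices,
    # cluster counts and the chaining of the layers are assumptions.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier

    def fit_three_layer(windows, labels, k_flows=16, k_windows=8):
        """windows: list of 2-D arrays, one per time window, each row = one flow's
        header-derived features. labels: activity label per window."""
        all_flows = np.vstack(windows)
        km_flows = KMeans(n_clusters=k_flows, n_init=10).fit(all_flows)       # layer 1

        def window_vector(flows):
            # histogram of the window's flows over the layer-1 clusters
            hist = np.bincount(km_flows.predict(flows), minlength=k_flows)
            return hist / max(len(flows), 1)

        X = np.array([window_vector(w) for w in windows])
        km_windows = KMeans(n_clusters=k_windows, n_init=10).fit(X)           # layer 2
        X2 = np.hstack([X, km_windows.transform(X)])                          # add cluster distances
        rf = RandomForestClassifier(n_estimators=200).fit(X2, labels)         # layer 3
        return km_flows, km_windows, rf
    ```
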
  • Publication (Open Access)
    Analysis of Internet services in IP over ATM networks
    (IEEE, 1999) Aracil Rico, Javier; Morató Osés, Daniel; Izal Azcárate, Mikel; Automática y Computación; Automatika eta Konputazioa
    This paper presents a trace-driven analysis of IP over ATM services from a user-perceived quality of service standpoint. QoS parameters such as the sustained throughput for transactional services and other ATM-layer parameters such as the burstiness (MBS) per connection are derived. In addition, a macroscopic analysis comprising the percentage of flows and bytes per service, TCP transaction duration, and mean bytes transferred in both directions is also presented. The traffic trace is obtained with novel measurement equipment that combines header extraction hardware and a high-end UNIX workstation capable of providing timestamp accuracy on the order of microseconds. The ATM link under analysis concentrates traffic from a large population of 1,500 hosts on the Public University of Navarra campus network, which produce approximately 1,700,000 TCP connections in the measurement period of one week. The results obtained from such a wealth of data suggest that QoS is primarily determined by transport protocols and not by ATM bandwidth. The sustained throughput of TCP connections never grows beyond 80 Kbps with 70% probability in the data transfer phase (i.e., in the ESTABLISHED state), and we observe a strong influence of the connection establishment phase on the user-perceived throughput. On the other hand, the burstiness of individual TCP connections is rather small; namely, TCP connections do not produce bursts according to the geometric law given by slow start and commonly assumed in previously published studies.
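
    As an illustration of one of the per-connection metrics mentioned above, the sketch below computes the sustained throughput of a TCP connection restricted to its data-transfer phase; the input format is hypothetical and the handshake is skipped simply by ignoring zero-payload segments.

    ```python
    # Illustrative computation of per-connection sustained throughput restricted to
    # the data-transfer phase, i.e. excluding the three-way handshake, in the spirit
    # of the ESTABLISHED-state figures quoted above. Input format is hypothetical.

    def sustained_throughput(conn):
        """conn: list of (timestamp_s, payload_bytes) for one TCP connection, sorted
        by time, where zero-payload entries are handshake/ACK-only segments.
        Returns bytes per second over the data-transfer phase (None if no data)."""
        data = [(t, b) for t, b in conn if b > 0]      # keep only data-carrying segments
        if len(data) < 2:
            return None
        duration = data[-1][0] - data[0][0]
        total = sum(b for _, b in data)
        return total / duration if duration > 0 else None
    ```
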
  • Publication (Open Access)
    Effective burst preemption in OBS network
    (IEEE, 2006) Klinkowski, Miroslaw; Careglio, Davide; Morató Osés, Daniel; Solé Pareta, Josep; Automática y Computación; Automatika eta Konputazioa
    Burst preemption is the most effective technique to provide Quality of Service (QoS) differentiation in Optical Burst Switching (OBS) networks. Nonetheless, in conventional OBS architectures, when preemption happens the control packet corresponding to the preempted burst continues its travel to the destination node, reserving resources at each node of the path. Therefore, an additional signaling procedure should be carried out to release these unnecessary reservations. In this paper we present a novel control architecture to efficiently apply burst preemption without the need for that signaling procedure. Analytical and simulation results prove the effectiveness of this proposal.
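
    A toy version of the preemption decision at a core node is sketched below. It only marks the preempted reservation locally; how the stale downstream reservations are invalidated without extra signaling is precisely the contribution of the paper's control architecture and is not reproduced here. All names and the reservation format are illustrative.

    ```python
    # Toy sketch of class-based burst preemption on one wavelength's schedule:
    # a higher-priority reservation that overlaps lower-priority ones preempts them;
    # otherwise the new burst is blocked. Not the paper's architecture.

    def reserve(schedule, start, end, priority):
        """schedule: list of dicts {'start','end','priority','preempted'}.
        Returns True if the new burst reservation is accepted."""
        conflicts = [r for r in schedule
                     if not r['preempted'] and start < r['end'] and end > r['start']]
        if any(r['priority'] >= priority for r in conflicts):
            return False                               # blocked by an equal/higher class
        for r in conflicts:
            r['preempted'] = True                      # preempt the lower-class bursts
        schedule.append({'start': start, 'end': end,
                         'priority': priority, 'preempted': False})
        return True
    ```
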
  • Publication (Open Access)
    Validation of HTTP response time from network traffic as an alternative to web browser instrumentation
    (IEEE, 2021) López Romera, Carlos; Morató Osés, Daniel; Magaña Lizarrondo, Eduardo; Izal Azcárate, Mikel; Ingeniaritza Elektrikoa, Elektronikoaren eta Telekomunikazio Ingeniaritzaren; Institute of Smart Cities - ISC; Ingeniería Eléctrica, Electrónica y de Comunicación
    The measurement of response time in hypertext transfer protocol (HTTP) requests is the most basic proxy measurement method for evaluating web browsing quality. It is used in the research literature and in application performance measurement instruments. During the development of a website, response time is obtained from in-browser measurements. After the website has been deployed, network traffic is used to continuously monitor activity, and the measurement data are used for service management and planning. In this study, we evaluate the accuracy of the measurements obtained from network traffic by comparing them with the in-browser measurement of resource load time. We evaluate the response times for encrypted and clear-text requests in an emulated network environment, in a laboratory deployment equivalent to a data centre network, and accessing popular web sites on the public Internet. The accuracy of response time measurements obtained from network traffic is noticeably higher for long-distance Internet paths than for low-delay paths (below 20 ms round-trip). The overhead of traffic encryption in secure HTTP requests has a negative effect on measurement accuracy, and we find relative measurement errors higher than 70% when using network traffic to infer HTTP response times compared with in-browser measurements.
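
    The network-side measurement being validated is, in essence, the interval from the first observed request byte to the last observed response byte of each exchange. A minimal sketch of that computation is shown below with a made-up event format; it is not the instrumentation used in the study.

    ```python
    # Illustrative network-side response-time computation: time from the first byte
    # of the HTTP request observed on the wire to the last byte of the matching
    # response. This is the kind of passive measurement that is compared against
    # in-browser resource load times; the record format is made up.

    def response_times(events):
        """events: list of (timestamp_s, direction, stream_id) tuples, where
        direction is 'req' or 'resp' and stream_id identifies one request/response
        exchange. Returns dict stream_id -> response time in seconds."""
        first_req, last_resp = {}, {}
        for t, direction, sid in events:
            if direction == 'req':
                first_req.setdefault(sid, t)           # keep the earliest request byte
            else:
                last_resp[sid] = t                     # keep the latest response byte
        return {sid: last_resp[sid] - first_req[sid]
                for sid in first_req if sid in last_resp}
    ```
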
  • Publication (Open Access)
    A survey on detection techniques for cryptographic ransomware
    (IEEE, 2019) Berrueta Irigoyen, Eduardo; Morató Osés, Daniel; Magaña Lizarrondo, Eduardo; Izal Azcárate, Mikel; Ingeniaritza Elektrikoa, Elektronikoaren eta Telekomunikazio Ingeniaritzaren; Institute of Smart Cities - ISC; Ingeniería Eléctrica, Electrónica y de Comunicación
    Crypto-ransomware is a type of malware that encrypts user files, deletes the original data, and asks for a ransom to recover the hijacked documents. It is a cyber threat that targets both companies and residential users, and has spread in recent years because of its lucrative results. Several articles have presented classifications of ransomware families and their typical behaviour. These insights have stimulated the creation of detection techniques for antivirus and firewall software. However, because the ransomware scene evolves quickly and aggressively, these studies quickly become outdated. In this study, we surveyed the detection techniques that the research community has developed in recent years. We compared the different approaches and classified the algorithms based on the input data they obtain from ransomware actions and the decision procedures they use to classify applications as benign or malicious. This is a detailed survey that focuses on detection algorithms, in contrast to most previous studies, which survey ransomware families or present isolated detection proposals. We also compared the results of these proposals.
  • Publication (Open Access)
    Predicción de tráfico de Internet y aplicaciones
    (2001) Bernal, I.; Aracil Rico, Javier; Morató Osés, Daniel; Izal Azcárate, Mikel; Magaña Lizarrondo, Eduardo; Díez Marca, L. A.; Automática y Computación; Automatika eta Konputazioa
    In this paper we focus on traffic prediction as a means to achieve dynamic bandwidth allocation in a generic Internet link. Our findings show that coarse prediction (bytes per interval) proves advantageous for dynamic link dimensioning, even if only a subset of the top traffic producers is included in the traffic predictor.
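
    One simple instance of coarse, bytes-per-interval prediction driving dynamic link dimensioning is sketched below, using an exponentially weighted moving average and a provisioning headroom; the smoothing factor and headroom are illustrative assumptions, not values from the paper.

    ```python
    # Toy example of coarse prediction (bytes per interval) used for dynamic link
    # dimensioning: an EWMA forecasts the next interval's volume and the link is
    # provisioned with a safety headroom. alpha and headroom are illustrative.

    def provision(bytes_per_interval, interval_s=60.0, alpha=0.3, headroom=1.5):
        """Yields (forecast_bytes, provisioned_bits_per_second) per interval."""
        forecast = float(bytes_per_interval[0])
        for observed in bytes_per_interval:
            yield forecast, forecast * 8 * headroom / interval_s
            forecast = alpha * observed + (1 - alpha) * forecast  # EWMA update

    # Made-up minute-by-minute byte counts
    for f, bps in provision([3.0e8, 3.2e8, 2.8e8, 4.1e8]):
        print(round(f), round(bps))
    ```
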
  • Publication (Open Access)
    Aproximación al modelado y predicción de tráfico de Internet como múltiplex de conexión de transporte
    (2001) Morató Osés, Daniel; Aracil Rico, Javier; Automática y Computación; Automatika eta Konputazioa
    Data traffic on today's Internet poses a new challenge for characterization and modelling, aimed at correctly dimensioning the equipment and links that make up the so-called "network of networks". In this work we present a review of the models proposed to date, which takes us from the limits of classical telephony to the concepts of long-range dependence and self-similarity. Building on these models, we address the characterization of a large population of Internet users. To do so, we rely on traffic traces from the IP over ATM Internet access link of the Public University of Navarra, obtained with a novel ATM link monitoring tool. With these traces we present a macroscopic analysis of the protocols and services on the link, which shows TCP as the main protocol and the Web as the most used service, accounting for more than three quarters of the traffic generated. Given the predominance of these TCP connections, we characterize the multiplex of TCP flows by means of stochastic processes. This characterization rests on several observed properties of the traffic, namely that the rate of a TCP connection depends strongly on its end-to-end delay (RTT) and that its burstiness does not follow the exponential progression that the slow-start algorithm would suggest. This leads us to a model based on (σ, ρ) constraints, which enables the use of circuit-switching technologies for per-flow bandwidth reservation. With the knowledge gained about the behaviour of TCP flows in the current network, we revisit the M/G/∞ flow model, one of the most widely used models both for generating synthetic data traffic and for studying its characteristics analytically. We confirm two of the hypotheses on which it is based (Poisson arrival process and flow durations with infinite variance), but we find that the constant-transfer-rate hypothesis is far from what is observed in real traffic. We therefore propose a modification of the model that incorporates a Weibull random variable for the flow rate. This modification allows the resulting traffic to better fit the variability of its marginal distribution; the classical M/G/∞ model underestimates the variability of the traffic even though it correctly models its long-range dependence. We show, however, that in the future high-speed networks that will form the next generation of the Internet, the long-range dependence effect will tend to disappear at the cost of an increase in traffic variability, which will become the factor that conditions network performance. This strongly supports modifications of the model along the proposed lines. Finally, we use the obtained characterization of TCP flows to propose a bandwidth estimation algorithm based on the RTT of the connections, aimed at bandwidth reservation in the links of Internet access providers. The results show that estimation based on parameters known a priori is feasible and improves on the results obtained with peak-rate allocators, static allocations or best-effort service. This opens numerous possibilities for studying allocation algorithms as well as the dynamic computation of their parameters.
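
    The proposed modification of the M/G/∞ flow model can be illustrated with a small synthetic-traffic generator: Poisson flow arrivals, heavy-tailed flow sizes and a Weibull-distributed per-flow rate instead of a constant rate. All distributions and parameters below are placeholders, not fitted values from the thesis.

    ```python
    # Sketch of synthetic traffic under the modified M/G/infinity flow model:
    # Poisson flow arrivals, heavy-tailed (Pareto) flow sizes, and a Weibull
    # per-flow rate instead of a constant rate. Parameters are placeholders.
    import numpy as np

    def synth_traffic(n_flows=2_000, lam=5.0, pareto_shape=1.5, size_scale=50_000,
                      weibull_shape=0.6, rate_scale=80_000, bin_s=1.0, seed=0):
        rng = np.random.default_rng(seed)
        arrivals = np.cumsum(rng.exponential(1.0 / lam, n_flows))    # Poisson arrivals
        sizes = (rng.pareto(pareto_shape, n_flows) + 1) * size_scale # heavy-tailed sizes
        rates = rng.weibull(weibull_shape, n_flows) * rate_scale     # bytes per second
        durations = np.minimum(sizes / rates, 3_600.0)               # cap keeps the toy small
        t_end = (arrivals + durations).max()
        bins = np.zeros(int(np.ceil(t_end / bin_s)) + 1)             # bytes per time bin
        for a, d, r in zip(arrivals, durations, rates):
            for b in range(int(a / bin_s), int((a + d) / bin_s) + 1):
                lo, hi = max(a, b * bin_s), min(a + d, (b + 1) * bin_s)
                bins[b] += max(0.0, hi - lo) * r                     # rate * overlap in this bin
        return bins

    series = synth_traffic()
    print(series.mean(), series.std())   # aggregate bytes per one-second bin
    ```
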
  • Publication (Open Access)
    Evaluation of RTT as an estimation of interactivity time for QoE evaluation in remote desktop environments
    (IEEE, 2023) Arellano Usón, Jesús; Magaña Lizarrondo, Eduardo; Morató Osés, Daniel; Izal Azcárate, Mikel; Ingeniería Eléctrica, Electrónica y de Comunicación; Ingeniaritza Elektrikoa, Elektronikoa eta Telekomunikazio Ingeniaritza
    In recent years, there has been a notable surge in the utilization of remote desktop services, largely driven by the emergence of new remote work models introduced during the pandemic. Traditional evaluation of the quality of experience (QoE) of users in remote desktop environments has relied on measures such as round-trip time (RTT). However, these measures are insufficient to capture all the factors that influence QoE. This study evaluated RTT and interactivity time in an enterprise environment over a period of 6 months and analysed the suitability of using RTT, drawing previously unexplored connections between RTT, interactivity, and QoE. The results indicate that RTT is an insufficient indicator of QoE in production environments with low RTT values. We outline precise measures of interactivity that are needed to capture all the factors that contribute to QoE in remote desktop environments.