Wednesday, 13 December 2017

Session 15: Performance Analysis and Benchmarking III

Deriving Coloured Generalised Stochastic Petri Net Performance Models from High-Precision Location Tracking Data

Authors:

Nikolas Anastasiou (Imperial College London)
William Knottenbelt (Imperial College London)

Abstract:

Stochastic performance models are widely used to analyse systems that involve the flow and processing of customers and resources. However, model formulation and parameterisation are traditionally manual and thus expensive, intrusive and error-prone. Our earlier work has demonstrated the feasibility of automated performance model construction from location tracking data. In particular, we presented a methodology based on a four-stage data processing pipeline, which automatically constructs Generalised Stochastic Petri Net (GSPN) performance models from an input dataset of raw location tracking traces. This pipeline was enhanced with a presence-based synchronisation detection mechanism.
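The parameterisation side of such a pipeline can be sketched in a few lines (hypothetical function names and trace format, not the paper's actual pipeline): per-visit sojourn times for a service zone are extracted from raw location samples, and the firing rate of a GSPN timed transition is fitted as the exponential maximum-likelihood estimate.

```python
def zone_sojourns(trace, zone):
    """Extract per-visit sojourn times for one service zone from a
    location trace of (tag_id, timestamp, zone) samples, assumed
    sorted by timestamp. A visit lasts from a tag's first sample
    inside the zone until its first sample outside it."""
    entered = {}       # tag_id -> entry timestamp
    sojourns = []
    for tag, t, z in trace:
        if z == zone and tag not in entered:
            entered[tag] = t
        elif z != zone and tag in entered:
            sojourns.append(t - entered.pop(tag))
    return sojourns

def timed_transition_rate(sojourns):
    """MLE of an exponential firing rate: visits / total service time."""
    return len(sojourns) / sum(sojourns)
```

For two customers served for 2 and 4 time units, this yields a rate of 1/3 firings per unit time for the corresponding timed transition.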

In this paper we introduce Coloured Generalised Stochastic Petri Nets (CGSPNs) into our methodology to provide support for multiple customer classes and service cycles. Distinct token types are used to model customers of different classes, while Johnson’s algorithm for enumerating elementary cycles in a directed graph is employed to detect service cycles. Coloured tokens are also used to enforce accurate customer routing after the completion of a service cycle. We evaluate these extensions and their integration into the methodology via a case study of a simplified model of an Accident and Emergency (A&E) department. The case study is based on synthetic location tracking data, generated using an extended version of the LocTrackJINQS location-aware queueing network simulator.
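Johnson's algorithm maintains blocked-node sets to stay output-polynomial; as a rough illustration of what "enumerating elementary cycles" means for service-cycle detection, here is a naive DFS-based enumeration (a sketch for intuition, not the algorithm used in the paper) that reports each cycle exactly once, anchored at its smallest node:

```python
def elementary_cycles(graph):
    """Enumerate the elementary cycles of a directed graph.

    graph: dict mapping node -> list of successor nodes.
    Nodes must be comparable; each cycle is reported once, as the
    list of its nodes starting from the smallest one.
    """
    cycles = []

    def dfs(start, node, path, on_path):
        for succ in graph.get(node, []):
            if succ == start:
                cycles.append(path[:])        # closed a cycle back to start
            elif succ > start and succ not in on_path:
                on_path.add(succ)
                dfs(start, succ, path + [succ], on_path)
                on_path.remove(succ)

    # Anchoring each search at its smallest node avoids duplicates.
    for start in sorted(graph):
        dfs(start, start, [start], {start})
    return cycles
```

On the graph 1→2, 2→3, 3→1 and 3→2 this finds the two service cycles [1, 2, 3] and [2, 3]. Unlike this exponential sketch, Johnson's algorithm bounds the work per reported cycle.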

DOI: 10.1145/2479871.2479931

Full text: PDF



Resource Availability Based Performance Benchmarking of Virtual Machine Migrations

Authors:

Senthil Nathan (Indian Institute of Technology, Bombay)
Purushottam Kulkarni (Indian Institute of Technology, Bombay)
Umesh Bellur (Indian Institute of Technology, Bombay)

Abstract:

Virtual machine migration enables load balancing, hot spot mitigation and server consolidation in virtualized environments. Live VM migration can be of two types: adaptive, in which the rate of page transfer adapts to virtual machine behavior (mainly page dirty rate), and non-adaptive, in which the VM pages are transferred at the maximum possible network rate. In either method, migration requires a significant amount of CPU and network resources, which can seriously impact the performance of both the VM being migrated and other co-located VMs. This calls for building a good understanding of the performance of migration itself and the resource needs of migration. Such an understanding can help select the appropriate VMs for migration while at the same time allocating the appropriate amount of resources for migration. While several empirical studies exist, a comprehensive evaluation of migration techniques with resource availability constraints is missing. As a result, it is not clear which migration technique to employ under a given set of conditions. In this work, we conduct a comprehensive empirical study to understand the sensitivity of migration performance to resource availability and other system parameters (like page dirty rate and VM size). The empirical study (with the Xen Hypervisor) reveals several shortcomings of the migration process. We propose several fixes and develop the Improved Live Migration technique (ILM) to overcome these shortcomings. Over a set of workloads used to evaluate ILM, the network traffic for migration was reduced by 14-93% and the migration time was reduced by 34-87% compared to the vanilla live migration technique. We also quantified the impact of migration on the performance of applications running on the migrating VM and other co-located VMs.
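The sensitivity the paper studies can be seen in a back-of-the-envelope model of non-adaptive pre-copy migration (an illustrative simulation with made-up parameters, not ILM or Xen's actual implementation): each round retransmits the pages dirtied while the previous round was in flight, so traffic and migration time grow sharply as the dirty rate approaches the available bandwidth.

```python
def precopy_migration(vm_mb, dirty_mb_s, bw_mb_s, stop_mb=50, max_rounds=30):
    """Simulate non-adaptive pre-copy live migration.

    Transfers the whole VM image, then iteratively retransmits the
    pages dirtied during the previous round, until the dirty set is
    small enough for a brief stop-and-copy, or the dirty rate
    reaches the bandwidth (pre-copy cannot converge).
    Returns (total_traffic_mb, total_time_s).
    """
    remaining, traffic, time = float(vm_mb), 0.0, 0.0
    for _ in range(max_rounds):
        round_time = remaining / bw_mb_s
        traffic += remaining
        time += round_time
        dirtied = dirty_mb_s * round_time   # pages dirtied this round
        if dirtied <= stop_mb or dirty_mb_s >= bw_mb_s:
            traffic += dirtied              # final stop-and-copy round
            time += dirtied / bw_mb_s
            break
        remaining = dirtied
    return traffic, time
```

For a 1000 MB VM over a 100 MB/s link, raising the dirty rate from 10 MB/s to 50 MB/s roughly doubles both traffic and migration time in this model, which is why resource availability matters when choosing VMs to migrate.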

DOI: 10.1145/2479871.2479932

Full text: PDF



Overcoming Memory Limitations in High-Throughput Event-Based Applications

Authors:

Marcelo R. N. Mendes (University of Coimbra)
Pedro Bizarro (University of Coimbra)
Paulo Marques (University of Coimbra)

Abstract:

The last decade has witnessed the emergence of business-critical applications processing streaming data for domains as diverse as credit card fraud detection, real-time recommendation systems, call-center monitoring, ad selection, network monitoring, and more. Most of those applications need to compute hundreds or thousands of metrics continuously while coping with very high event input rates. As a consequence, large amounts of state (i.e., moving windows) need to be maintained, very often exceeding the available memory resources. Nonetheless, current event processing platforms have little or no memory management capabilities, hanging or simply crashing when memory is exhausted. In this paper we report our experience in using secondary storage to solve the performance problems of memory-constrained event processing applications. To that end, we propose SlideM, a novel buffer management algorithm that exploits the access pattern of sliding windows in order to efficiently handle memory shortages. The proposed algorithm was implemented in a real stream processing engine and validated through an extensive experimental performance evaluation. Results corroborate the efficacy of the approach: the system was able to sustain very high input rates (up to 300,000 events per second) for very large windows (about 30GB) while consuming small amounts of main memory (a few kilobytes).
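The access pattern being exploited is that a sliding window is appended at one end and expired at the other, i.e., strict FIFO, so older events can be written to disk sequentially and read back in order. A minimal sketch in that spirit (illustrative only, not the actual SlideM algorithm): an in-memory tail for new events, sequential spill of the oldest tail entries to a temp file, and expiration that drains spilled batches before the tail.

```python
import os
import pickle
import tempfile
from collections import deque

class SpillingWindow:
    """Sliding-window buffer that spills old events to disk (sketch).

    New events go to an in-memory tail; when the tail exceeds
    mem_capacity, its oldest half is pickled sequentially to a spill
    file. Expiration reads spilled batches back in FIFO order before
    consuming the tail, so window order is preserved end to end.
    """

    def __init__(self, mem_capacity=4):
        self.mem_capacity = mem_capacity
        self.tail = deque()            # newest events, kept in RAM
        self.head = deque()            # batch read back from disk
        self.on_disk = 0               # events currently spilled
        fd, self.path = tempfile.mkstemp()
        os.close(fd)
        self.w = open(self.path, "ab")  # sequential writer
        self.r = open(self.path, "rb")  # sequential reader

    def insert(self, event):
        self.tail.append(event)
        if len(self.tail) > self.mem_capacity:
            batch = [self.tail.popleft()
                     for _ in range(self.mem_capacity // 2)]
            pickle.dump(batch, self.w)  # sequential append to disk
            self.w.flush()
            self.on_disk += len(batch)

    def expire(self):
        """Remove and return the oldest event in the window."""
        if not self.head and self.on_disk:
            batch = pickle.load(self.r)  # next spilled batch, in order
            self.on_disk -= len(batch)
            self.head.extend(batch)
        return self.head.popleft() if self.head else self.tail.popleft()
```

Because both spill and reload are strictly sequential, the disk sees only append and scan I/O, which is the property that lets this style of buffer keep up with high input rates on modest RAM.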

DOI: 10.1145/2479871.2479933

Full text: PDF



Systematic Performance Evaluation Based on Tailored Benchmark Applications

Authors:

Christian Weiss (SAP Research)
Dennis Westermann (SAP Research)
Christoph Heger (Karlsruhe Institute of Technology)
Martin Moser (SAP AG)

Abstract:

Performance (i.e., response time, throughput, resource consumption) is a key quality metric of today’s applications as it heavily affects customer satisfaction. SAP strives to identify and fix performance problems before customers face them. Therefore, performance engineering methods are applied in all stages of the software lifecycle. However, especially in the development phase, continuous performance evaluations can introduce considerable overhead for developers, which hinders their broad application in practice. In order to evaluate the performance of a certain software artefact (e.g., comparing two design alternatives), a developer has to run measurements that are tailored to the software artefact under test. The use of standard benchmarks would create less overhead, but the information gain is often not sufficient to answer the specific questions of developers. In this industrial paper, we present an approach that enables exhaustive, tailored performance testing with minimal effort for developers. The approach allows benchmark applications to be defined through a domain-specific model and realizes the transformation of those models into benchmark applications via a generic Benchmark Framework. The application of the approach in the context of the SAP NetWeaver Cloud development environment demonstrated that we can efficiently identify performance problems that would not have been detected by our existing performance test infrastructure.
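The model-to-benchmark transformation can be illustrated with a toy analogue (a hypothetical spec format of my own; SAP's actual domain-specific model and Benchmark Framework are not reproduced here): a declarative spec naming design alternatives is turned into timed micro-benchmark runs.

```python
import timeit

def run_benchmark(spec):
    """Execute a declarative benchmark spec and time each variant.

    spec["variants"] maps variant names to Python statements;
    spec["setup"] runs once per variant before timing, and
    spec["iterations"] controls the measurement loop.
    Returns {variant_name: total_seconds}.
    """
    return {
        name: timeit.timeit(stmt,
                            setup=spec.get("setup", "pass"),
                            number=spec["iterations"])
        for name, stmt in spec["variants"].items()
    }

# Comparing two design alternatives for the same task:
spec = {
    "setup": "data = list(range(1000))",
    "iterations": 200,
    "variants": {
        "comprehension": "[x * 2 for x in data]",
        "map": "list(map(lambda x: x * 2, data))",
    },
}
```

The point of generating such runs from a model is that the developer writes only the spec: setup, iteration counts, and result collection come from the framework rather than hand-written measurement code.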

DOI: 10.1145/2479871.2479934

Full text: PDF
