Session 16: Poster and Demonstration Papers

On Time-Average Limits in Deterministic and Stochastic Petri Nets

Authors:

Tomáš Brázdil (Masaryk University)
Ľuboš Korenčiak (Masaryk University)
Jan Krčál (Masaryk University)
Jan Křetínský (Masaryk University)
Vojtěch Řehák (Masaryk University)

Abstract:

In this poster paper, we study the performance of systems modeled by deterministic and stochastic Petri nets (DSPN). As a performance measure, we consider the long-run average time spent in a set of markings. Even though this measure often appears in the DSPN literature, its existence has never been examined. We provide a DSPN model of a simple communication protocol in which the long-run average time spent in a fixed marking is not well defined due to the highly unstable behavior of the model. Further, we introduce a syntactic restriction on DSPN which preserves most of the modeling power yet guarantees the existence of the long-run average.
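For readers new to the measure: writing M(t) for the marking occupied at time t (notation assumed here, not fixed by the abstract), the long-run average time spent in a set of markings A is the limit

    \[
      \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} \mathbf{1}\{ M(t) \in A \} \, \mathrm{d}t ,
    \]

and the counterexample in the paper is a DSPN for which this limit does not exist.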

DOI: 10.1145/2479871.2479936

Full text: PDF

Model-Based Performance Testing in the Cloud Using the MBPeT Tool

Authors:

Fredrik Abbors (Åbo Akademi University)
Tanwir Ahmad (Åbo Akademi University)
Dragos Truscan (Åbo Akademi University)
Ivan Porres (Åbo Akademi University)

Abstract:

We present an approach for performance testing of software services. We use Probabilistic Timed Automata to model the workload by describing how different user types interact with the system. We use these models to generate load in real time and measure different performance indicators. An in-house tool, MBPeT, supports our approach. We exemplify the approach with an auction web service case study and show how performance information about the system under test can be collected.
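As a rough illustration of the workload-model idea (a hypothetical sketch, not MBPeT code or its model format): each user type can be thought of as a set of actions with selection probabilities and think times, and each virtual user repeatedly samples the model to decide what to do next.

    import java.util.List;
    import java.util.Random;

    // Hypothetical sketch of one emulated user walking a probabilistic model of
    // actions, pausing for a "think time" between requests.
    public class ProbabilisticUser {

        // One edge of the user model: selection probability, think time, action name.
        record Step(double probability, long thinkTimeMs, String action) {}

        private static final Random RNG = new Random();

        // Pick the next step according to the edge probabilities (assumed to sum to 1).
        static Step choose(List<Step> steps) {
            double p = RNG.nextDouble(), acc = 0.0;
            for (Step s : steps) {
                acc += s.probability();
                if (p <= acc) return s;
            }
            return steps.get(steps.size() - 1);
        }

        public static void main(String[] args) throws InterruptedException {
            // Toy model of an auction-site visitor: browse often, bid occasionally, then leave.
            List<Step> model = List.of(
                    new Step(0.7, 2000, "browse"),
                    new Step(0.2, 5000, "bid"),
                    new Step(0.1, 0, "exit"));

            while (true) {
                Step next = choose(model);
                if (next.action().equals("exit")) break;
                Thread.sleep(next.thinkTimeMs());   // emulate user think time
                System.out.println("issuing request: " + next.action());
                // A real load generator would send the request to the system under
                // test here and record response times as performance indicators.
            }
        }
    }

In MBPeT itself the models are Probabilistic Timed Automata and many such users are run in parallel in real time; the sketch only conveys the sampling idea.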

DOI: 10.1145/2479871.2479937

Full text: PDF

SPECsip Infrastructure and Application Benchmarks

Authors:

Yao-Min Chen (Oracle)
Azeem Jiva (Oracle)
Madhava Dass (Oracle)
Victoria Roxas (Intel)
Sam Warner (Intel)
Gary DeVal (IBM)
Wei Chen (Voxeo Labs)
Kenny Lee (Voxeo Labs)

Abstract:

We describe two SIP benchmarks developed by SPEC. SPECsip_Infrastructure2011 is a SIP proxy benchmark with realistic user behavior modeling. The second benchmark, not yet officially named, targets a JSR-289-conforming application server. We describe the design, usage, and underlying methodologies of both benchmarks.
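As background for the second benchmark, a JSR-289 application server hosts SIP servlets. The minimal sketch below is illustrative only (it is not part of the benchmark, and deployment descriptors are omitted); the doInvite/createResponse calls follow the standard SIP Servlet API.

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.sip.SipServlet;
    import javax.servlet.sip.SipServletRequest;

    // Minimal JSR-289 SIP servlet: accepts every incoming call attempt.
    public class AnswerAllServlet extends SipServlet {

        @Override
        protected void doInvite(SipServletRequest request)
                throws ServletException, IOException {
            // Respond to the INVITE with 200 OK.
            request.createResponse(200).send();
        }
    }

The second benchmark drives application servers running servlets of this kind, while SPECsip_Infrastructure2011 targets the SIP proxy layer.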

DOI: 10.1145/2479871.2479938

Full text: PDF

MockTell: Exploring Challenges of User Emulation in Interactive Voice Response Testing

Authors:

Siddhartha Asthana (Indraprastha Institute of Information Technology, Delhi)
Pushpendra Singh (Indraprastha Institute of Information Technology, Delhi)
Amarjeet Singh (Indraprastha Institute of Information Technology, Delhi)

Abstract:

The increasing use of telephones has made Interactive Voice Response (IVR), a technology for accessing information over the phone, popular among commercial organizations. IVR systems are used for critical applications such as flight reservation and tele-banking, which require well-tested IVR systems. Manual testing of an IVR application requires dialing a number, listening to voice prompts, and responding through key presses or speech.

Automating tests for IVR applications requires mimicking user behavior. We present MockTell, a generic tool for call emulation with the ability to mimic user behavior. MockTell uses data generated from real-world calls for call emulation, which helps in optimizing and evaluating the performance of IVR applications. MockTell also allows simulation of calls to support further testing of the system.
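A hypothetical sketch (not MockTell code) of user emulation driven by data from real calls: each recorded reaction pairs an expected IVR prompt with a pause and the DTMF digits the user pressed, and the emulator replays the sequence against the system under test.

    import java.util.List;

    // Hypothetical sketch: replay user behavior captured from a real call as a
    // sequence of DTMF key presses with the recorded delays.
    public class EmulatedCaller {

        // One observed reaction: prompt heard, pause before answering, keys pressed.
        record Reaction(String promptHeard, long pauseMs, String keysPressed) {}

        public static void main(String[] args) throws InterruptedException {
            // A trace like this would be derived from real-world call data.
            List<Reaction> trace = List.of(
                    new Reaction("main menu", 1500, "2"),
                    new Reaction("enter account number", 3000, "12345678#"),
                    new Reaction("goodbye", 500, ""));

            for (Reaction r : trace) {
                System.out.println("expected IVR prompt: " + r.promptHeard());
                Thread.sleep(r.pauseMs());   // emulate user hesitation
                if (!r.keysPressed().isEmpty()) {
                    System.out.println("sending DTMF: " + r.keysPressed());
                    // A real emulator would place the call over a telephony or SIP
                    // channel and measure how long the IVR takes to respond.
                }
            }
        }
    }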

DOI: 10.1145/2479871.2479939

Full text: PDF

Introduction to Dynamic Program Analysis with DiSL

Authors:

Lukáš Marek (Charles University)
Yudi Zheng (University of Lugano)
Danilo Ansaloni (University of Lugano)
Lubomír Bulej (Charles University)
Aibek Sarimbekov (University of Lugano)
Walter Binder (University of Lugano)
Zhengwei Qi (Shanghai Jiao Tong University)

Abstract:

DiSL is a new domain-specific language for bytecode instrumentation with complete bytecode coverage. It reconciles expressiveness and efficiency of low-level bytecode manipulation libraries with a convenient, high-level programming model inspired by aspect-oriented programming. This paper summarizes the language features of DiSL and gives a brief overview of several dynamic program analysis tools that were ported to DiSL. DiSL is available as open-source under the Apache 2.0 license.
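For flavor, a DiSL instrumentation is written as annotated Java methods (snippets) whose insertion points are selected by markers and scope patterns. The sketch below follows the style of the examples in the DiSL publications; the package, marker, and attribute names are taken from those papers and may differ slightly in the released tool.

    // Package and marker names as used in the DiSL papers; treat them as assumptions.
    import ch.usi.dag.disl.annotation.Before;
    import ch.usi.dag.disl.marker.BodyMarker;

    public class EntryLogger {

        // Inserted at the beginning of every method body matched by the scope pattern.
        @Before(marker = BodyMarker.class, scope = "Main.*")
        static void onMethodEntry() {
            System.out.println("method entry");
        }
    }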

DOI: 10.1145/2479871.2479940

Full text: PDF

FINCoS: Benchmark Tools for Event Processing Systems

Authors:

Marcelo R. N. Mendes (University of Coimbra)
Pedro Bizarro (University of Coimbra)
Paulo Marques (University of Coimbra)

Abstract:

FINCoS is a set of benchmarking tools for load generation and performance measurement of event processing systems. It facilitates the development of novel benchmarks by allowing researchers to create synthetic workloads, and it enables users of the technology to evaluate candidate solutions using their own real datasets. An extensible set of adapters allows the framework to communicate with different CEP engines, and its architecture allows load generation to be distributed across multiple nodes. In this paper we briefly review FINCoS, introducing its main characteristics and features and discussing how it measures the performance of event processing platforms.
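A hypothetical sketch (not the FINCoS API) of the two ingredients the abstract mentions: synthetic workload generation behind a pluggable adapter, with simple pacing toward a target event rate.

    import java.util.Map;
    import java.util.Random;

    // Hypothetical sketch: a single-threaded synthetic workload generator that
    // emits events at a fixed target rate through a pluggable adapter.
    public class SyntheticEventGenerator {

        // Stand-in for the adapter layer that forwards events to a CEP engine.
        interface EngineAdapter {
            void send(Map<String, Object> event);
        }

        public static void main(String[] args) throws InterruptedException {
            EngineAdapter adapter = event -> System.out.println("event: " + event);

            int targetEventsPerSecond = 5;
            long intervalNanos = 1_000_000_000L / targetEventsPerSecond;
            Random rng = new Random();

            for (int i = 0; i < 25; i++) {
                long start = System.nanoTime();

                // Synthetic stock-tick event; a real workload would follow a schema
                // and value distributions chosen by the benchmark designer.
                adapter.send(Map.of(
                        "symbol", "ACME",
                        "price", 100 + rng.nextGaussian(),
                        "timestamp", System.currentTimeMillis()));

                // Simple pacing to hold the configured submission rate.
                long remaining = intervalNanos - (System.nanoTime() - start);
                if (remaining > 0) {
                    Thread.sleep(remaining / 1_000_000, (int) (remaining % 1_000_000));
                }
            }
        }
    }

FINCoS itself distributes such generators across multiple nodes and measures the engine receiving the events; the sketch only conveys the rate-controlled emission idea.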

DOI: 10.1145/2479871.2479941

Full text: PDF
