Wednesday, 13 December 2017

Session 5: Software Performance Modeling

Decision Support via Automated Metric Comparison for the Palladio-based Performance Blame Analysis

Authors:

Frank Brüseke (University of Paderborn)
Gregor Engels (University of Paderborn)
Steffen Becker (University of Paderborn)

Abstract:

When developing component-based systems, we incorporate third-party black-box components. For each component, a performance contract has been specified by its developers. If errors occur when testing the system built from these components, it is important to find out whether individual components violate their performance contracts or whether the composition itself is faulty. This task is called performance blame analysis. In our previous work, we presented a performance blame analysis approach that blames components based on a comparison of response time values from the failed test case to expected values derived from the performance contract. In that approach, the system architect has to manually assess whether the test data series shows faster or slower response times than the data derived from the contract. This is laborious, as the system architect has to do this for each component operation. In this paper, we present an automated comparison of each pair of data series as decision support. In contrast to our approach, other approaches do not achieve fully automated decision support because they do not incorporate sophisticated contracts. We exemplify our performance blame analysis, including the automated decision support, using the “Common Component Modeling Example” (CoCoME) benchmark.
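To illustrate the core idea of blaming a component by comparing measured response times against contract-derived expectations, here is a minimal sketch. The operation names, data values, mean-based comparison rule, and tolerance are all illustrative assumptions, not the paper's actual metric comparison:

```python
from statistics import mean

def blame(measured, expected, tolerance=0.10):
    """Flag an operation if its mean measured response time exceeds the
    contract-derived expectation by more than the given tolerance."""
    return mean(measured) > mean(expected) * (1 + tolerance)

# Hypothetical response-time series (ms) for two component operations.
measurements = {
    "Cashier.scanItem": [12.1, 11.8, 13.0, 12.5],
    "Store.queryStock": [48.3, 51.0, 47.9, 49.6],
}
contract = {
    "Cashier.scanItem": [12.0, 12.2, 11.9, 12.4],
    "Store.queryStock": [30.0, 31.5, 29.8, 30.7],
}

blamed = [op for op in measurements if blame(measurements[op], contract[op])]
print(blamed)  # → ['Store.queryStock']
```

Automating this per-operation comparison is what removes the manual assessment step described in the abstract.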

DOI: 10.1145/2479871.2479886


Propagation of Incremental Changes to Performance Model due to SOA Design Pattern Application

Authors:

Nariman Mani (Carleton University)
Dorina C. Petriu (Carleton University)
Murray Woodside (Carleton University)

Abstract:

Design patterns for Service Oriented Architecture (SOA) provide solutions to architectural, design, and implementation problems involving software models in different layers of a SOA design. For performance analysis, a performance model can be generated from the SOA design and used to predict its performance. The impact of a design pattern is also reflected in the performance model, so it is helpful to be able to trace the causality from the design pattern to its predicted performance impact. This paper describes a technique for automatically refactoring a SOA design model by applying a design pattern and for propagating the incremental changes to its Layered Queueing Network (LQN) performance model. A SOA design model is expressed in UML extended with two standard profiles: SoaML for expressing SOA solutions and MARTE for performance annotations. The SOA design pattern is specified using a Role Based Modeling Language (RBML), and its application is automated using QVT-O. Automated incremental transformations are explored and evaluated for effectiveness on a case study example.
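As an intuition aid for incremental change propagation, the sketch below applies a proxy-style pattern change to a toy LQN-like model rather than regenerating the whole model. The dictionary representation, task names, and demands are assumptions for illustration only, not the paper's QVT-O transformation or the actual LQN metamodel:

```python
# Toy LQN-like model: tasks with call dependencies and CPU demands.
lqn = {
    "tasks": {
        "Client": {"calls": ["OrderService"], "demand_ms": 1.0},
        "OrderService": {"calls": [], "demand_ms": 5.0},
    }
}

def insert_proxy(model, caller, callee, proxy_name, proxy_demand_ms):
    """Incremental change: route caller -> callee through a new proxy task,
    touching only the affected entries instead of rebuilding the model."""
    tasks = model["tasks"]
    tasks[proxy_name] = {"calls": [callee], "demand_ms": proxy_demand_ms}
    calls = tasks[caller]["calls"]
    tasks[caller]["calls"] = [proxy_name if c == callee else c for c in calls]
    return model

insert_proxy(lqn, "Client", "OrderService", "OrderProxy", 0.5)
print(lqn["tasks"]["Client"]["calls"])  # → ['OrderProxy']
```

The point of the incremental style is that only the tasks touched by the pattern change, so the solver can reuse the rest of the model.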

DOI: 10.1145/2479871.2479887


Rapid Performance Modeling by Transforming Use Case Maps to Palladio Component Models

Authors:

Christian Vogel (Karlsruhe Institute of Technology)
Heiko Koziolek (ABB Corporate Research)
Thomas Goldschmidt (ABB Corporate Research)
Erik Burger (Karlsruhe Institute of Technology)

Abstract:

Complex information flows in the domain of industrial software systems complicate the creation of performance models needed to validate challenging performance requirements. Performance models using annotated UML diagrams or mathematical notations are difficult to discuss with stakeholders from the industrial automation domain, who often have a limited software engineering background. We introduce a novel model transformation from Use Case Maps (UCM) to the Palladio Component Model (PCM), which enables performance modeling based on an intuitive notation for complex information flows. The resulting models can be solved using existing simulators or analytical solvers. We validated the correctness of the transformation with three case study models and performed a user study. The results showed a performance prediction deviation of less than 10 percent compared to a reference model in most cases.
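The essential shape of such a transformation is a structural mapping from one notation's elements to the other's. The toy structures below are assumptions made purely for illustration; they are not the actual UCM or PCM metamodels, and the real transformation is far richer:

```python
# Toy UCM path: a sequence of responsibilities with resource demands.
ucm_path = [
    {"responsibility": "checkOrder", "cpu_demand_ms": 4.0},
    {"responsibility": "updatePlant", "cpu_demand_ms": 11.5},
]

def to_pcm_seff(path):
    """Translate each UCM responsibility into a PCM-style internal action
    carrying the same resource demand."""
    return [
        {"action": "InternalAction",
         "name": step["responsibility"],
         "demand_ms": step["cpu_demand_ms"]}
        for step in path
    ]

seff = to_pcm_seff(ucm_path)
print([a["name"] for a in seff])  # → ['checkOrder', 'updatePlant']
```

Because demands survive the mapping, the generated PCM-side model can be handed to existing simulators or solvers, which is what makes the intuitive UCM notation usable for prediction.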

DOI: 10.1145/2479871.2479888


Non-Markovian Analysis for Model Driven Engineering of Real-Time Software

Authors:

Laura Carnevali (Università di Firenze)
Marco Paolieri (Università di Firenze)
Alessandro Santoni (Università di Firenze)
Enrico Vicario (Università di Firenze)

Abstract:

Quantitative evaluation of models with stochastic timings can decisively support schedulability analysis and performance engineering of real-time concurrent systems. These tasks require modeling formalisms and solution techniques that can encompass stochastic temporal parameters firmly constrained within a bounded support, thus breaking the limits of Markovian approaches. The problem is further exacerbated by the need to represent suspension of timers, which results from common patterns of real-time programming. This poses relevant challenges both in the theoretical development of non-Markovian solution techniques and in their practical integration within a viable tailoring of industrial processes.

We address both issues by extending a method for transient analysis of non-Markovian models to encompass suspension of timers. The solution technique addresses models that include timers with bounded and deterministic support, which are essential to represent synchronous task releases, timeouts, offsets, jitters, and computations constrained by a Best Case Execution Time (BCET) and a Worst Case Execution Time (WCET). As a notable trait, the theory of analysis is amenable to integration within a Model Driven Development (MDD) approach, providing specific evaluation capabilities in support of performance engineering without disrupting the established flow of design and documentation.
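The paper's contribution is an analytical transient analysis; purely as a rough intuition aid, the Monte Carlo sketch below estimates the probability that a job with execution time bounded by a BCET and a WCET misses its deadline when a suspension may delay it. The uniform distribution, suspension model, and all parameter values are illustrative assumptions, not the paper's method:

```python
import random

def deadline_miss_prob(bcet, wcet, deadline, susp_prob, susp_time, n=100_000):
    """Estimate the deadline-miss probability of a job whose execution time
    has bounded support [bcet, wcet] (uniform here) and which may be
    delayed by a timer suspension with probability susp_prob."""
    random.seed(42)  # fixed seed for a reproducible estimate
    misses = 0
    for _ in range(n):
        t = random.uniform(bcet, wcet)      # bounded-support execution time
        if random.random() < susp_prob:      # occasional suspension delay
            t += susp_time
        misses += t > deadline
    return misses / n

print(deadline_miss_prob(bcet=2.0, wcet=8.0, deadline=9.0,
                         susp_prob=0.3, susp_time=3.0))
```

With these parameters a miss occurs only when the job is suspended and its base execution time exceeds 6.0 ms, so the estimate converges to roughly 0.3 × (2/6) ≈ 0.1; the analytical technique in the paper computes such transient probabilities exactly rather than by sampling.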

DOI: 10.1145/2479871.2479889


Scalability Testing of MS Lync Services: Towards Optimal Provisioning of Virtualised Hardware

Authors:

Knut Helge Rygg (IDI, NTNU)
Gunnar Brataas (SINTEF ICT and IDI, NTNU)
Geir Millstein (Telenor GID)
Terje Molle (Telenor GID)

Abstract:

A method for scalability testing of the Microsoft Lync 2010 communication system is presented, exploring the relation between system size and system load. The method can be used for optimal provisioning, balancing user Quality of Experience (QoE) against equipment volume and energy consumption. Observing a standard edition of Lync on a virtualised platform using the VMware hypervisor, the method indicated linear scalability. QoE was mainly limited by the Mean Opinion Score (MOS); this MOS limit corresponded to a Lync front-end server utilisation of about 60%.
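Given the reported linear scalability and the ~60% utilisation ceiling, a provisioning calculation reduces to a simple linear model. The per-user load coefficient below is an invented illustrative number, not a figure from the paper:

```python
def utilisation(users, per_user_load=0.0012):
    """Linear scalability model: each user adds a fixed share of
    front-end server utilisation (coefficient is an assumption)."""
    return users * per_user_load

def max_users(util_limit=0.60, per_user_load=0.0012):
    """Largest user count whose predicted utilisation stays at or
    under the QoE-preserving limit (60% per the abstract)."""
    return int(util_limit / per_user_load)

print(max_users())  # → 500
```

In a real provisioning exercise the per-user coefficient would be measured from the scalability tests, and the 60% ceiling is the point where MOS, and hence QoE, begins to degrade.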

DOI: 10.1145/2479871.2479890
