Wednesday, 13 December 2017

Session 13: Tutorials

The CloudScale Method for Software Scalability, Elasticity, and Efficiency Engineering: A Tutorial

Authors:

Sebastian Lehrig (Chemnitz University of Technology)
Steffen Becker (Chemnitz University of Technology)

Abstract:

In cloud computing, software engineers design systems for virtually unlimited resources that cloud providers bill on a pay-per-use basis. Elasticity management systems provision these resources autonomously to deal with changing workloads. Such workloads call for new objective metrics that allow engineers to quantify quality properties like scalability, elasticity, and efficiency. However, software engineers currently lack methods that aid them in engineering their software with respect to such properties. The CloudScale project therefore developed tools for these engineering tasks. The tools cover reverse engineering of architectural models from source code, editors for manual design/adaptation of such models, and tools for analyzing modeled and operating software with respect to scalability, elasticity, and efficiency. All tools are interconnected via ScaleDL, a common architectural language, and the CloudScale Method, which leads through the engineering process. In this tutorial, we walk through the method step by step, briefly introducing each tool and ScaleDL.
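
To make the notion of an objective elasticity metric concrete, the following Java sketch scores a workload trace by how far the provisioned resources deviate from the demanded ones; the metric definition and all names are illustrative assumptions, not CloudScale's or ScaleDL's own.

    // Hypothetical illustration of an elasticity-style metric: compare resource
    // demand against what an elasticity manager actually provisioned over time.
    public class ElasticityMetricSketch {

        /** Average number of resource units provisioned beyond demand (over-provisioning). */
        static double avgOverProvisioning(int[] demanded, int[] provisioned) {
            double sum = 0;
            for (int t = 0; t < demanded.length; t++) {
                sum += Math.max(0, provisioned[t] - demanded[t]);
            }
            return sum / demanded.length;
        }

        /** Average number of resource units missing relative to demand (under-provisioning). */
        static double avgUnderProvisioning(int[] demanded, int[] provisioned) {
            double sum = 0;
            for (int t = 0; t < demanded.length; t++) {
                sum += Math.max(0, demanded[t] - provisioned[t]);
            }
            return sum / demanded.length;
        }

        public static void main(String[] args) {
            int[] demanded    = {2, 2, 4, 8, 8, 4, 2};  // VMs needed per time step
            int[] provisioned = {2, 2, 2, 6, 8, 8, 4};  // VMs actually allocated
            System.out.printf("avg over-provisioning:  %.2f VMs%n",
                    avgOverProvisioning(demanded, provisioned));
            System.out.printf("avg under-provisioning: %.2f VMs%n",
                    avgUnderProvisioning(demanded, provisioned));
        }
    }

Under-provisioning typically translates into violated service-level objectives, while over-provisioning translates into wasted pay-per-use cost, which is why the two directions are usually reported separately.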

DOI: 10.1145/2668930.2688818

Full text: PDF

How to Build a Benchmark

Authors:

Jóakim v. Kistowski (University of Würzburg)
Jeremy A. Arnold (IBM Corporation)
Karl Huppler (independent)
Klaus-Dieter Lange (Hewlett-Packard Company)
John L. Henning (Oracle)
Paul Cao (Hewlett-Packard Company)

Abstract:

Standardized benchmarks have become widely accepted tools for the comparison of products and the evaluation of methodologies. These benchmarks are created by consortia such as SPEC and TPC under confidentiality agreements, which leave outside observers little opportunity to see the processes and concerns that drive benchmark development. This paper introduces the primary concerns of benchmark development from the perspectives of the SPEC and TPC committees. We provide a benchmark definition, outline the types of benchmarks, and explain the characteristics of a good benchmark. We focus on the characteristics important for a standardized benchmark, as created by the SPEC and TPC consortia. To this end, we specify the primary criteria to be employed for benchmark design and workload selection, and we use multiple standardized benchmarks as examples to demonstrate how these criteria are met in practice.
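
One recurring design choice when building such a benchmark is whether the workload is fixed-work (time a fixed number of operations) or fixed-time (count operations completed within a time budget); the Java sketch below contrasts the two. It is a generic illustration under assumed names, not code or a taxonomy taken from SPEC or TPC.

    // Illustrative contrast between two common workload designs (names are hypothetical):
    // fixed-work (time N operations) vs. fixed-time (count operations within a budget).
    public class WorkloadTypesSketch {

        static volatile double sink; // prevents the JIT from discarding the work entirely

        /** Fixed-work: run a given number of operations and report ops/second. */
        static double fixedWorkThroughput(Runnable op, int operations) {
            long start = System.nanoTime();
            for (int i = 0; i < operations; i++) {
                op.run();
            }
            double seconds = (System.nanoTime() - start) / 1e9;
            return operations / seconds;
        }

        /** Fixed-time: run operations for a given duration and report ops/second. */
        static double fixedTimeThroughput(Runnable op, long durationMillis) {
            long deadline = System.nanoTime() + durationMillis * 1_000_000L;
            long count = 0;
            while (System.nanoTime() < deadline) {
                op.run();
                count++;
            }
            return count / (durationMillis / 1000.0);
        }

        public static void main(String[] args) {
            Runnable op = () -> sink = Math.sqrt(Math.random()); // stand-in for a real unit of work
            System.out.printf("fixed-work: %.0f ops/s%n", fixedWorkThroughput(op, 1_000_000));
            System.out.printf("fixed-time: %.0f ops/s%n", fixedTimeThroughput(op, 1000));
        }
    }

Fixed-time designs keep the run length predictable across systems of very different speed, whereas fixed-work designs keep the amount of processed work directly comparable.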

DOI: 10.1145/2668930.2688819

Full text: PDF

DOs and DON'Ts of Conducting Performance Measurements in Java

Authors:

Vojtěch Horký (Charles University)
Peter Libič (Charles University)
Antonín Steinhauser (Charles University)
Petr Tůma (Charles University)

Abstract:

The tutorial aims at practitioners – researchers or developers – who need to execute small-scale performance experiments in Java. The goal is to provide the attendees with a compact overview of some of the issues that can hinder an experiment or mislead its evaluation, and to discuss the methods and tools that can help avoid such issues. The tutorial will examine multiple elements of the software execution stack that impact performance, including common virtual machine mechanisms (just-in-time compilation and garbage collection, together with the associated runtime adaptation), some operating system features (timers), and the hardware (memory). Although the focus is on Java, some of the take-away points should apply in a more general performance-experiment context.
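
As a taste of the issues such an experiment faces, the hand-rolled harness below (an illustrative sketch, not the authors' material) shows three common precautions: warming up so the just-in-time compiler has compiled the hot path before timing starts, repeating the timed runs because garbage collection and operating-system noise create outliers, and writing results to a volatile sink so the optimizer cannot remove the measured work. For serious experiments a dedicated harness such as JMH is preferable.

    // Minimal hand-rolled measurement sketch illustrating common precautions:
    // warm-up to let the JIT compile the hot path, repeated timed runs instead of
    // a single one, and a sink so the optimizer cannot remove the measured work.
    public class MeasurementSketch {

        static volatile long sink; // keeps the computed result observable

        static long workload(int n) {       // the code under test (illustrative)
            long sum = 0;
            for (int i = 0; i < n; i++) {
                sum += Integer.bitCount(i);
            }
            return sum;
        }

        public static void main(String[] args) {
            final int n = 1_000_000;

            // Warm-up: give the JIT time to compile workload() before timing it.
            for (int i = 0; i < 20; i++) {
                sink = workload(n);
            }

            // Timed runs: report each one, since the distribution matters more than
            // a single number (GC pauses and OS noise show up as outliers).
            for (int run = 0; run < 10; run++) {
                long start = System.nanoTime();   // monotonic timer, unlike currentTimeMillis()
                sink = workload(n);
                long elapsed = System.nanoTime() - start;
                System.out.printf("run %d: %.3f ms%n", run, elapsed / 1e6);
            }
        }
    }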

DOI: 10.1145/2668930.2688820

Full text: PDF

Hybrid Machine Learning/Analytical Models for Performance Prediction: A Tutorial

Authors:

Diego Didona (Universidade de Lisboa)
Paolo Romano (Universidade de Lisboa)

Abstract:

Classical approaches to performance prediction of computer systems rely on two, typically antithetic, techniques: Machine Learning (ML) and Analytical Modeling (AM). ML takes a black-box approach that typically achieves very good accuracy in regions of the feature space that have been sufficiently explored during training, but that has very weak extrapolation power (i.e., poor accuracy in regions for which no, or too few, samples are known). Conversely, AM relies on a white-box approach whose key advantage is that it requires no or minimal training, hence supporting prompt instantiation of the target system's performance model. However, to remain tractable, AM-based performance predictors typically rely on simplifying assumptions, so their accuracy suffers in scenarios that do not match these assumptions. This tutorial describes techniques that exploit AM and ML in synergy to get the best of both worlds. It surveys several such hybrid techniques and presents use cases spanning a wide range of application domains.
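
A minimal way to see how the two approaches can be combined is to let a white-box queueing formula produce the baseline prediction and train a black-box corrector only on its residual error. The Java sketch below does this with an M/M/1 response-time formula and a nearest-neighbor residual lookup; the hybrid scheme and all names are simplified assumptions, not a specific technique from the tutorial.

    import java.util.ArrayList;
    import java.util.List;

    // Sketch of a hybrid predictor: an analytical M/M/1 response-time model provides
    // the baseline, and a tiny nearest-neighbor "learner" corrects it with the
    // residual error observed at the most similar known arrival rate.
    public class HybridPredictorSketch {

        final double serviceTimeSec;                        // assumed mean service time S
        final List<double[]> samples = new ArrayList<>();   // {arrivalRate, residual}

        HybridPredictorSketch(double serviceTimeSec) {
            this.serviceTimeSec = serviceTimeSec;
        }

        /** White-box part: M/M/1 mean response time R = S / (1 - lambda * S). */
        double analyticalPrediction(double arrivalRate) {
            double utilization = arrivalRate * serviceTimeSec;
            if (utilization >= 1.0) return Double.POSITIVE_INFINITY; // unstable queue
            return serviceTimeSec / (1.0 - utilization);
        }

        /** Training: remember how far off the analytical model was at this rate. */
        void addObservation(double arrivalRate, double measuredResponseTime) {
            samples.add(new double[] {arrivalRate,
                    measuredResponseTime - analyticalPrediction(arrivalRate)});
        }

        /** Black-box part: residual of the nearest known arrival rate (0 if untrained). */
        double learnedCorrection(double arrivalRate) {
            double best = 0, bestDist = Double.POSITIVE_INFINITY;
            for (double[] s : samples) {
                double dist = Math.abs(s[0] - arrivalRate);
                if (dist < bestDist) { bestDist = dist; best = s[1]; }
            }
            return best;
        }

        /** Hybrid prediction = analytical baseline + learned correction. */
        double predict(double arrivalRate) {
            return analyticalPrediction(arrivalRate) + learnedCorrection(arrivalRate);
        }

        public static void main(String[] args) {
            HybridPredictorSketch p = new HybridPredictorSketch(0.010); // S = 10 ms
            p.addObservation(50.0, 0.024);  // measured response times (hypothetical)
            p.addObservation(80.0, 0.061);
            System.out.printf("predicted R at 70 req/s: %.3f s%n", p.predict(70.0));
        }
    }

With no training samples the predictor falls back to the pure analytical model, preserving AM's extrapolation ability; where observations are dense, the learned correction compensates for the model's simplifying assumptions, mirroring ML's strength.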

DOI: 10.1145/2668930.2688823

Full text: PDF
