Thursday, 14 December 2017

Session 11: Vision and Work-in-Progress Papers

Towards a Standard Event Processing Benchmark

Authors:

Marcelo R. N. Mendes (University of Coimbra)
Pedro Bizarro (University of Coimbra)
Paulo Marques (University of Coimbra)

Abstract:

There has been increasing interest in both academia and industry in systematic methods for evaluating the performance and scalability of event processing systems. A number of performance results have been disclosed in recent years, but there is still a lack of standardized benchmarks that allow an objective comparison of the different systems. In this paper, we present our work in progress: the BiCEP benchmark suite, a set of workloads, datasets and tools for evaluating different performance aspects of event processing platforms. In particular, we introduce Pairs, the first of the BiCEP benchmarks, aimed at assessing the ability of CEP engines to process progressively larger volumes of events and simultaneous queries while providing quick answers.
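The abstract's core metric — how many events per second an engine can sustain while answering continuous queries — can be sketched with a toy micro-benchmark. This is an illustrative stand-in, not the Pairs workload itself; the `query` handler and the event values are invented:

```python
import time

def measure_throughput(handler, events):
    """Sustained throughput (events/second) of an event handler."""
    start = time.perf_counter()
    for event in events:
        handler(event)
    elapsed = time.perf_counter() - start
    return len(events) / elapsed

# Toy continuous query: count events whose value exceeds a threshold.
matched = 0
def query(event):
    global matched
    if event > 0.5:
        matched += 1

throughput = measure_throughput(query, [0.1, 0.7, 0.9, 0.3] * 25_000)
print(f"{throughput:,.0f} events/s, {matched} matches")
```

Scaling the event list and the number of concurrent queries, as Pairs does, would then reveal where throughput degrades.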

DOI: 10.1145/2479871.2479913

Full text: PDF


Towards a Methodology Driven by Relationships of Quality Attributes for QoS-based Analysis

Authors:

Steffen Becker (University of Paderborn)
Lucia Happe (Kapova) (Karlsruhe Institute of Technology)
Raffaela Mirandola (Politecnico di Milano)
Catia Trubiani (University of L'Aquila)

Abstract:

Engineering high quality software is a tough task. In order to know whether a certain quality attribute has been achieved or degraded, it has to be quantified by analysis or measured. However, determining what to quantify and how these quantities are related to each other is the difficult part.

Early analysis of the quality attributes of a software system on the basis of the system’s planned architecture allows informed decisions on design trade-offs. Such decisions can be later validated by measurements on the running system.

In this paper, we revisit software quality attributes. In particular, we introduce a generic taxonomy of quality attributes, argue for the relationships between these attributes, and finally outline future work leading to an attribute-based methodology for evaluating software architectures. The goal is to reason about multiple quality attributes of software systems so that they can be quantitatively evaluated and traded off against each other.

DOI: 10.1145/2479871.2479914

Full text: PDF


Assessing Computer Performance with SToCS

Authors:

Leonardo Piga (University of Campinas)
Gabriel F. T. Gomes (University of Campinas)
Rafael Auler (University of Campinas)
Bruno Rosa (University of Campinas)
Sandro Rigo (University of Campinas)
Edson Borin (University of Campinas)

Abstract:

Several aspects of a computer system cause performance measurements to include random errors. Moreover, these systems are typically composed of a non-trivial combination of individual components that may cause one system to perform better or worse than another depending on the workload. Hence, properly measuring and comparing computer systems performance are non-trivial tasks.

The majority of work published at recent major computer architecture conferences does not report the random errors measured in the experiments. The few remaining authors use only confidence intervals or standard deviations to quantify and factor out random errors. Recent publications claim that even this approach can lead to misleading conclusions.

In this work, we reproduce and discuss the results obtained in a previous study. Finally, we propose SToCS, a tool that integrates several statistical frameworks and facilitates the analysis of computer science experiments.
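The statistical issue the abstract raises — comparing systems whose measurements carry random error — can be illustrated with a minimal confidence-interval check. This is a normal-approximation sketch, not part of SToCS, and the sample timings below are invented:

```python
import statistics
from statistics import NormalDist

def confidence_interval(samples, confidence=0.95):
    """Normal-approximation confidence interval for the mean of
    repeated performance measurements (adequate for large n)."""
    n = len(samples)
    mean = statistics.fmean(samples)
    sem = statistics.stdev(samples) / n ** 0.5  # standard error of the mean
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return mean - z * sem, mean + z * sem

# Two systems whose intervals overlap cannot be ranked with confidence.
a = [10.1, 10.3, 9.9, 10.2, 10.0, 10.4, 9.8, 10.1, 10.2, 10.0]
b = [10.0, 10.5, 9.7, 10.3, 10.1, 10.6, 9.9, 10.2, 10.4, 10.1]
lo_a, hi_a = confidence_interval(a)
lo_b, hi_b = confidence_interval(b)
overlap = lo_a <= hi_b and lo_b <= hi_a
print(overlap)
```

Reporting a single mean per system would hide exactly this ambiguity, which is the practice the authors criticize.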

DOI: 10.1145/2479871.2479915

Full text: PDF


Towards a Workload Model for Online Social Applications

Authors:

Alexandru-Corneliu Olteanu (University Politehnica of Bucharest)
Alexandru Iosup (Delft University of Technology)
Nicolae Țăpuș (University Politehnica of Bucharest)

Abstract:

Popular online social applications hosted by social platforms each serve millions of interconnected users. Understanding the workloads of these applications is key to improving the management of their performance and costs. In this work, we analyse traces gathered over a period of thirty-one months for hundreds of Facebook applications. We characterize the popularity of applications, which describes how applications attract users, and their evolution pattern, which describes how the number of users changes over the lifetime of an application. We further model both application popularity and evolution, and validate our model statistically by fitting five probability distributions to empirical data for each of the model variables. Among the results, we find that most applications reach their maximum number of users within a third of their lifetime, and that the lognormal distribution provides the best fit for the popularity distribution.
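The distribution-fitting step the abstract describes can be sketched as follows. Since the Facebook traces are not public, the per-application popularity data below are synthetic, and the moment fit on log-values is just one simple way to fit a lognormal:

```python
import math
import random
import statistics

def fit_lognormal(samples):
    """Fit a lognormal by taking logs and estimating mu and sigma
    of the resulting (approximately normal) values."""
    logs = [math.log(s) for s in samples]
    return statistics.fmean(logs), statistics.stdev(logs)

# Hypothetical per-application peak user counts, standing in for
# the Facebook application traces analysed in the paper.
random.seed(42)
popularity = [random.lognormvariate(8.0, 1.5) for _ in range(5000)]
mu, sigma = fit_lognormal(popularity)
print(round(mu, 2), round(sigma, 2))
```

Fitting several candidate distributions this way and comparing goodness-of-fit scores is how one would arrive at the paper's conclusion that the lognormal fits popularity best.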

DOI: 10.1145/2479871.2479916

Full text: PDF


A Robust Optimization for Proactive Energy Management in Virtualized Data Centers

Authors:

Ibrahim Takouna (University of Potsdam)
Wesam Dawoud (University of Potsdam)
Kai Sachs (SAP AG)
Christoph Meinel (University of Potsdam)

Abstract:

Energy management has become a significant concern in data centers, both to reduce operational costs and to maintain system reliability. Virtualization allows server consolidation, which increases server utilization and reduces energy consumption by turning off unused servers. However, server consolidation and powering servers down can also have adverse consequences if not applied carefully. For instance, many researchers assume a deterministic demand for capacity planning, but demand is always subject to uncertainty arising from workload prediction errors and workload fluctuation. This paper presents a robust optimization approach for proactive capacity planning. We do not presume that the demand of VMs is deterministic; instead, we implement a range-based prediction approach rather than a single-point prediction. We then implement a robust optimization model that exploits the range-based prediction to determine the number of active servers for each capacity planning period. Simulation results show that our approach can mitigate undesirable changes in the power state of servers. Additionally, the results indicate increased server availability for hosting new VMs and improved reliability against system failures during power-state changes. As future work, we intend to apply our approach to dynamic workloads such as web applications. Since we currently consider only the CPU demand of VMs, we also plan to investigate applying our approach to other resources. Finally, we will compare our approach against approaches based on stochastic optimization.
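One minimal reading of range-based robust capacity planning — sizing the active-server pool against the upper bound of each VM's predicted demand interval — can be sketched as below. The demand ranges and capacity figures are invented, and the paper's actual optimization model is certainly richer than this:

```python
import math

def robust_active_servers(vm_demand_ranges, server_capacity):
    """Size the active-server pool against the upper bound of each
    VM's predicted demand range, so the plan tolerates the worst
    case inside every prediction interval."""
    worst_case = sum(high for _low, high in vm_demand_ranges)
    return math.ceil(worst_case / server_capacity)

# Hypothetical per-VM CPU demand intervals (in cores) from a
# range predictor, packed onto 4-core servers.
ranges = [(1.0, 2.5), (0.5, 1.5), (2.0, 3.0), (1.5, 2.0)]
servers = robust_active_servers(ranges, server_capacity=4)
print(servers)  # sum of upper bounds = 9.0 cores -> 3 servers
```

Planning against the upper bound costs a few extra active servers versus a single-point prediction, but avoids the repeated power-state changes that a too-tight plan triggers when demand lands high in its interval.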

DOI: 10.1145/2479871.2479917

Full text: PDF


A Meta-Model for Performance Modeling of Dynamic Virtualized Network Infrastructures

Authors:

Piotr Rygielski (Karlsruhe Institute of Technology)
Steffen Zschaler (King's College London)
Samuel Kounev (Karlsruhe Institute of Technology)

Abstract:

In this work-in-progress paper, we present a new meta-model designed for the performance modeling of dynamic data center network infrastructures. Our approach models characteristic aspects of Cloud data centers which were not crucial in classical data centers. We present our meta-model and demonstrate its use for performance modeling and analysis through an example, including a transformation into OMNeT++ for performance simulation.

DOI: 10.1145/2479871.2479918

Full text: PDF


Performance Modelling of Database Contention using Queueing Petri Nets

Authors:

David Coulden (Imperial College London)
Rasha Osman (Imperial College London)
William J. Knottenbelt (Imperial College London)

Abstract:

Most performance evaluation studies of database systems are high-level studies limited by the expressiveness of their modelling formalisms. In this paper, we illustrate the potential of Queueing Petri Nets as a successor to traditionally adopted modelling formalisms for evaluating the complexities of database systems. This is demonstrated through the construction and analysis of a Queueing Petri Net model of table-level database locking. We show that this model predicts mean response times better than a corresponding Petri net model.
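A Queueing Petri Net model is far more expressive than this, but the basic intuition — an exclusive table-level lock serializes transactions, so the lock behaves like a single-server queue — can be illustrated with a plain M/M/1 approximation. The rates below are invented:

```python
def mm1_response_time(arrival_rate, service_rate):
    """M/M/1 mean response time R = 1 / (mu - lambda): transactions
    queue for the exclusive table lock as for a single server."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

# 40 tx/s arriving at a table whose locked section completes 50 tx/s.
r = mm1_response_time(40.0, 50.0)
print(f"{r:.2f} s")  # 1 / (50 - 40) = 0.10 s
```

A QPN can additionally capture lock acquisition/release as token firings and mix queueing places with ordinary places, which is what lets it model the locking protocol itself rather than only its aggregate delay.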

DOI: 10.1145/2479871.2479919

Full text: PDF


CloudScale: Scalability Management for Cloud Systems

Authors:

Gunnar Brataas (SINTEF ICT & IDI, NTNU)
Erlend Stav (SINTEF ICT)
Sebastian Lehrig (Universität Paderborn)
Steffen Becker (Universität Paderborn)
Goran Kopčak (Ericsson Nikola Tesla)
Darko Huljenic (Ericsson Nikola Tesla)

Abstract:

This work-in-progress paper introduces the EU FP7 STREP CloudScale. The contribution of this paper is an overall description of CloudScale's engineering approach for the design and evolution of scalable cloud applications and services. An Electronic Health Record (EHR) system serves as a motivating scenario. The overall CloudScale method describes how CloudScale will identify and gradually solve scalability problems in existing applications. CloudScale will also enable the modelling of design alternatives and the analysis of their effect on scalability and cost. Best practices for scalability will further guide the design process. The CloudScale method is supported by three integrated tools and a scalability description modelling language. CloudScale will be validated in two case studies.

DOI: 10.1145/2479871.2479920

Full text: PDF


A Generic Approach for Architecture-Level Performance Modeling and Prediction of Virtualized Storage Systems

Authors:

Qais Noorshams (Karlsruhe Institute of Technology)
Andreas Rentschler (Karlsruhe Institute of Technology)
Samuel Kounev (Karlsruhe Institute of Technology)
Ralf Reussner (Karlsruhe Institute of Technology)

Abstract:

Virtualized environments introduce an additional abstraction layer on top of physical resources to enable collective resource usage by multiple systems. With the rise of I/O-intensive applications, however, the virtualized storage of such shared environments can quickly become a bottleneck and lead to performance and scalability issues. Such issues can be avoided through careful design of the application architecture and systematic capacity planning throughout the system life cycle. In current practice, however, virtualized storage and its performance-influencing design decisions are often neglected or treated as a black box. In this work-in-progress paper, we propose a generic approach for performance modeling and prediction of virtualized storage systems at the software architecture level. More specifically, we propose two approaches for performance modeling of virtualized storage systems, and two approaches for combining these performance models with architecture-level performance models. The goal is to cope with the increasing complexity of virtualized storage systems while retaining the benefit of intuitive software architecture-level models.

DOI: 10.1145/2479871.2479921

Full text: PDF


Adaptive Deployment in Ad-Hoc Systems Using Emergent Component Ensembles: Vision Paper

Authors:

Lubomír Bulej (Charles University in Prague & Academy of Sciences of the Czech Republic)
Tomáš Bureš (Charles University in Prague & Academy of Sciences of the Czech Republic)
Vojtěch Horký (Charles University in Prague)
Jaroslav Keznikl (Charles University in Prague & Academy of Sciences of the Czech Republic)

Abstract:

Mobile cloud computing in the context of ad-hoc clouds brings new challenges when offloading computation from mobile devices. The management of application deployment needs to ensure that the offloading provides users with the expected benefits, while coping with a highly dynamic environment that lacks a central authority and in which computational nodes appear and disappear.

We propose an approach to the management of ad-hoc systems in such a dynamic environment, using component ensembles that connect mobile devices with more powerful computation nodes. Our approach aims to address the challenges of scalability and robustness of such systems without the need for a central authority, relying instead on simple patterns that lead to reasonable adaptation decisions based on limited and imprecise information.

DOI: 10.1145/2479871.2479922

Full text: PDF
