Tuesday, 12 December 2017

Session 5: Cloud Performance

Position Paper: Cloud-based Performance Testing: Issues and Challenges

Authors:

Junzan Zhou (Zhejiang University)
Shanping Li (Zhejiang University)
Zhen Zhang (Zhejiang University)
Zhen Ye (Zhejiang University)

Abstract:

Conducting performance testing is essential to evaluating system performance. With the emergence of cloud computing, applying cloud resources to large-scale performance testing has become very attractive, and many organizations have already used cloud-based performance testing in real-world projects. Cloud computing brings many benefits for performance testing, but it also raises new problems, such as performance variation of the cloud platform and security concerns. In this overview, we discuss the differences between traditional and cloud-based performance testing and survey the state of the art in cloud-based performance testing. We identify the key issues and their associated challenges; for some of these issues, we formalize the problem and sketch our initial ideas. We focus on the quality of workload generation and present experimental results that validate the existence and degree of these challenges. We conclude that cloud-based performance testing is beneficial in many cases.

DOI: 10.1145/2462307.2462321

Full text: PDF
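
The paper's focus on workload-generation quality can be made concrete with a small experiment. Below is a minimal, hypothetical sketch (not the authors' tool): an open-loop load generator that records how far each request drifts from its intended send time. On an oversubscribed cloud VM, this drift is exactly the kind of platform-induced variation the abstract warns about. All names and parameters are illustrative assumptions.

```python
import statistics
import time

def generate_load(target_rps: float, duration_s: float, work=lambda: None):
    """Open-loop load generator: fire `work` at a fixed target rate and
    record how late each request is relative to its scheduled send time."""
    interval = 1.0 / target_rps
    start = time.perf_counter()
    drifts = []
    for i in range(int(target_rps * duration_s)):
        intended = start + i * interval
        now = time.perf_counter()
        if now < intended:
            time.sleep(intended - now)
        drifts.append(time.perf_counter() - intended)  # lateness in seconds
        work()  # placeholder for the actual request against the system under test
    return drifts

if __name__ == "__main__":
    drifts = generate_load(target_rps=200, duration_s=5)
    print(f"mean drift: {statistics.mean(drifts) * 1e3:.3f} ms")
    print(f"p99 drift:  {sorted(drifts)[int(0.99 * len(drifts))] * 1e3:.3f} ms")
    print(f"max drift:  {max(drifts) * 1e3:.3f} ms")
```

Running the same sketch on a dedicated machine and on a shared cloud instance, and comparing the drift statistics, is one simple way to quantify how much the platform itself distorts the generated workload.
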


Position Paper: Cloud System Deployment and Performance Evaluation Tools for Distributed Databases

Authors:

Markus Klems (Karlsruhe Institute of Technology)
Hoàng Anh Lê (Karlsruhe Institute of Technology)

Abstract:

Creating system setups for controlled performance-evaluation experiments on distributed systems is time-consuming and expensive. Re-creating experiment setups and reproducing experimental results published by other researchers is even more challenging. In this paper, we present an experiment automation approach for evaluating distributed systems in compute cloud environments. We propose three concepts that should guide the design of experiment automation tools: (1) capture experiment plans in software modules, (2) run experiments in a publicly accessible cloud-based Elastic Lab, and (3) collaborate on experiments in an open, distributed collaboration system. We developed two tools that implement these concepts and discuss the challenges and lessons learned during their implementation. An initial exemplary use case with Apache Cassandra on Amazon EC2 provides a first insight into the kinds of performance and scalability experiments our tools enable.

DOI: 10.1145/2462307.2462322

Full text: PDF
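
The first of the three concepts, capturing experiment plans in software modules, can be illustrated with a minimal, hypothetical sketch (not the authors' actual tooling): an experiment plan expressed as an ordered list of named, repeatable steps that can be versioned and shared like any other code. Every identifier below is an assumption for illustration only.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Step:
    name: str
    action: Callable[[], None]

@dataclass
class ExperimentPlan:
    """An experiment plan captured as code: an ordered, repeatable
    sequence of setup, workload, and collection steps."""
    name: str
    steps: List[Step] = field(default_factory=list)

    def step(self, name: str):
        """Decorator that registers a function as the next step of the plan."""
        def register(fn: Callable[[], None]) -> Callable[[], None]:
            self.steps.append(Step(name, fn))
            return fn
        return register

    def run(self) -> None:
        for s in self.steps:
            print(f"[{self.name}] running step: {s.name}")
            s.action()

plan = ExperimentPlan("cassandra-scalability")

@plan.step("provision cluster")
def provision() -> None:
    print("  (would request VMs from the cloud provider here)")

@plan.step("run workload")
def run_workload() -> None:
    print("  (would start the benchmark client here)")

@plan.step("collect results")
def collect() -> None:
    print("  (would download logs and metrics here)")

if __name__ == "__main__":
    plan.run()
```

Because the plan is an ordinary module, re-running someone else's experiment reduces to importing their plan and calling `run()`, which is the reproducibility property the abstract argues for.
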
