Thursday, 14 December 2017

Session 13: Work in Progress and Vision Papers I

Software Contention Aware Queueing Network Model of Three-Tier Web Systems (Work-in-Progress)

Authors:

Shadi Ghaith (University College Dublin)
Miao Wang (University College Dublin)
Philip Perry (University College Dublin)
Liam Murphy (University College Dublin)

Abstract:

Predicting the performance characteristics of software applications through modelling typically relies on Queueing Network Models of the various system hardware resources. Leaving software resources, such as a limited number of threads, out of such models reduces prediction accuracy. Accounting for Software Contention is a challenging task: existing techniques for modelling software components are complex, require deep knowledge of the software architecture, and involve elaborate measurement processes to obtain the model’s service demands. In addition, solving the resulting model usually requires simulation solvers, which are often time-consuming. In this work, we aim to provide a simpler model for three-tier web software systems which accounts for Software Contention and can be solved by time-efficient analytical solvers. We achieve this by extending the existing “Two-Level Iterative Queuing Modelling of Software Contention” method to handle the number of threads at the Application Server tier and the number of Data Sources at the Database Server tier. This is done in a generic manner so that the solution can be extended to other software resources, such as memory and critical sections. Initial results show that our technique clearly outperforms existing techniques.
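
The two-level iterative method itself is not detailed in the abstract, but the class of time-efficient analytical solvers it targets is typified by Mean Value Analysis (MVA) for closed queueing networks. A minimal single-class sketch (the station demands and population below are illustrative, not taken from the paper):

```python
def mva(demands, customers):
    """Exact Mean Value Analysis for a closed, single-class queueing
    network; demands[k] is the total service demand at station k.
    Returns (system throughput, per-station mean queue lengths)."""
    queue = [0.0] * len(demands)
    throughput = 0.0
    for n in range(1, customers + 1):
        # Arrival theorem: an arriving customer sees the queue lengths
        # of the same network with one customer fewer.
        residence = [d * (1.0 + q) for d, q in zip(demands, queue)]
        throughput = n / sum(residence)          # Little's law
        queue = [throughput * r for r in residence]
    return throughput, queue

# Two stations (say, application server and database) with equal
# demands; throughput approaches 1/max(demands) as load grows.
x, q = mva([0.5, 0.5], 100)
```

Each population step runs in time linear in the number of stations, which is why such solvers are fast compared with simulation.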

DOI: 10.1145/2568088.2576760

Full text: PDF


Efficient and Accurate Stack Trace Sampling in the Java Hotspot Virtual Machine (Work-in-Progress Paper)

Authors:

Peter Hofer (Johannes Kepler University)
Hanspeter Mössenböck (Johannes Kepler University)

Abstract:

Sampling is a popular approach to collecting data for profiling and monitoring because it has a small impact on performance and does not modify the observed application. Sampled stack traces can be merged into a calling context tree that shows where the application spends its time and where performance problems lie. However, Java VM implementations usually rely on safepoints for sampling stack traces, and safepoints can cause inaccuracies and have a considerable performance impact. We present a new approach that does not use safepoints but instead relies on the operating system to take snapshots of the stack at arbitrary points. These snapshots are then asynchronously decoded into call traces, which are merged into a calling context tree. We show that we are able to decode over 90% of the snapshots, and that our approach has a very small impact on performance even at high sampling rates.
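
The merging step described above is a standard calling-context-tree construction; a minimal sketch (generic, not the authors’ VM-internal implementation):

```python
class CCTNode:
    """One calling context: a method reached via a chain of callers."""
    def __init__(self, method):
        self.method = method
        self.children = {}   # callee method name -> CCTNode
        self.samples = 0     # samples whose innermost frame is this node

def merge_trace(root, frames):
    """Merge one decoded stack trace (outermost frame first) into the
    calling context tree rooted at `root`."""
    node = root
    for frame in frames:
        if frame not in node.children:
            node.children[frame] = CCTNode(frame)
        node = node.children[frame]
    node.samples += 1   # the sample was taken in the innermost frame

# Traces sharing a caller prefix share a path in the tree.
root = CCTNode("<root>")
for trace in [["main", "parse"], ["main", "parse", "lex"], ["main", "emit"]]:
    merge_trace(root, trace)
```

Each merge is linear in the stack depth, so trees for millions of samples stay cheap to build.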

DOI: 10.1145/2568088.2576759

Full text: PDF


PowerPerfCenter: A Power and Performance Prediction Tool for Multi-Tier Applications

Authors:

Varsha Apte (IIT Bombay)
Bhavin Doshi (IIT Bombay)

Abstract:

The performance analysis of a server application and the sizing of the hardware required to host it in a data center continue to be pressing issues today. With most server-grade computers now built with “frequency-scaled CPUs” and other such devices, it has become important to answer performance and sizing questions in the presence of such hardware. PowerPerfCenter is an application performance modeling tool that allows the specification of devices whose operating speeds can change dynamically. It also estimates the power used by the machines in the presence of such devices. Furthermore, it allows the specification of a dynamic workload, which is required to understand the impact of power management. We validated the performance metrics predicted by PowerPerfCenter against those measured for an application deployed on a test-bed of frequency-scaled CPUs, and found the match to be good. We also used PowerPerfCenter to show that power savings may not be significant if a device’s idle power consumption does not differ across its operating speeds.
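
The last finding can be illustrated with a simple utilization-based power model (an assumed textbook-style model, not necessarily the one PowerPerfCenter uses): if idle power is identical at every operating speed, running slower mainly raises utilization for the same offered work, and mean power barely moves. All numbers below are hypothetical.

```python
def mean_power(utilization, p_busy, p_idle):
    """Mean power of a device that draws p_busy watts while serving
    requests and p_idle watts otherwise (utilization in [0, 1])."""
    return utilization * p_busy + (1.0 - utilization) * p_idle

# Hypothetical CPU with the same idle power at both speeds and busy
# power that falls at the lower frequency. Halving the speed doubles
# utilization for the same workload, so mean power is nearly unchanged.
full = mean_power(0.3, 90.0, 40.0)   # full speed, 30% utilized
half = mean_power(0.6, 65.0, 40.0)   # half speed, 60% utilized
```

Only when the idle power itself drops at lower speeds does the second configuration yield a meaningful saving.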

DOI: 10.1145/2568088.2576758

Full text: PDF
