Tuesday, 12 December 2017

Session 11: Reports of Experience and Test

System Performance Analyses through Object-oriented Fault and Coupling Prisms

Authors:

Alessandro Murgia (University of Antwerp)
Roberto Tonelli (University of Cagliari)
Michele Marchesi (University of Cagliari)
Giulio Concas (University of Cagliari)
Steve Counsell (Brunel University)
Stephen Swift (Brunel University)

Abstract:

A fundamental aspect of a system’s performance over time is the number of faults it generates. The relationship between the software engineering concept of ‘coupling’ (i.e., the degree of inter-connectedness of a system’s components) and faults is still an open research question, and one with strong implications for performance: excessive coupling is generally acknowledged to contribute to fault-proneness. In this paper, we explore the relationship between faults and coupling. Two releases from each of three open-source Eclipse projects (six releases in total) were used as an empirical basis, and coupling and fault data were extracted from those systems. A contrasting coupling profile between fault-free and fault-prone classes was observed, and this result was statistically supported. Object-oriented (OO) classes with low values of fan-in (incoming coupling) and fan-out (outgoing coupling) tended to be fault-free, while classes with high fan-out tended to be relatively fault-prone. We also considered size as an influence on fault-proneness. The study thus emphasizes the importance of minimizing coupling where possible (particularly fan-out); failing to control coupling may store up problems for later in a system’s life, and controlling class size should be a concomitant goal.
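
As a simple illustration of the fan-in and fan-out notions used in the abstract, the following sketch counts incoming and outgoing class-level dependencies from a small dependency list. It is not the authors' extraction tooling, and the class names are hypothetical.

    # Minimal sketch: computing class-level fan-in (incoming coupling) and
    # fan-out (outgoing coupling) from a list of dependencies.
    # The dependency list and class names are hypothetical.
    from collections import defaultdict

    # Each pair (source, target) means "class `source` depends on class `target`".
    dependencies = [
        ("OrderService", "Repository"),
        ("OrderService", "Logger"),
        ("BillingService", "Repository"),
        ("BillingService", "Logger"),
        ("Repository", "Logger"),
    ]

    fan_out = defaultdict(int)  # outgoing coupling per class
    fan_in = defaultdict(int)   # incoming coupling per class
    for source, target in dependencies:
        fan_out[source] += 1
        fan_in[target] += 1

    for cls in sorted({c for pair in dependencies for c in pair}):
        print(f"{cls}: fan-in={fan_in[cls]}, fan-out={fan_out[cls]}")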

DOI: 10.1145/2568088.2568089

Full text: PDF


Run-Time Performance Optimization of a BigData Query Language

Authors:

Yanbin Liu (IBM T.J. Watson Research Center)
Parijat Dube (IBM T.J. Watson Research Center)
Scott C. Gray (IBM T.J. Watson Research Center)

Abstract:

JAQL is a query language for large-scale data that connects BigData analytics with the MapReduce framework. JAQL is an IBM product, and its performance is critical for IBM InfoSphere BigInsights, a BigData analytics platform. In this paper, we report our work on improving JAQL performance from multiple perspectives: we explore the parallelism of JAQL, profile it for performance analysis, identify I/O as the dominant performance bottleneck, and improve performance with an emphasis on reducing I/O data size and increasing (de)serialization efficiency. With the TPCH benchmark on a simple Hadoop cluster, we report up to 2x performance improvements in JAQL from our optimizations.
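
The abstract's emphasis on reducing I/O data size and improving (de)serialization efficiency can be illustrated with a small measurement sketch. It uses only the Python standard library (json vs. pickle) and does not reproduce JAQL's or Hadoop's actual serializers; the record layout is invented.

    # Minimal sketch of the kind of measurement behind an I/O-focused tuning
    # effort: compare serialized size and (de)serialization time of the same
    # records under two encodings. Illustrative only; not JAQL's serializers.
    import json
    import pickle
    import time

    records = [{"id": i, "qty": i % 50, "price": i * 0.01} for i in range(100_000)]

    def measure(name, dumps, loads):
        start = time.perf_counter()
        blob = dumps(records)              # serialize all records to bytes
        encode_s = time.perf_counter() - start
        start = time.perf_counter()
        loads(blob)                        # deserialize them again
        decode_s = time.perf_counter() - start
        print(f"{name}: {len(blob):,} bytes, "
              f"encode {encode_s:.3f}s, decode {decode_s:.3f}s")

    measure("json  ", lambda r: json.dumps(r).encode(), json.loads)
    measure("pickle", pickle.dumps, pickle.loads)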

DOI: 10.1145/2568088.2576800

Full text: PDF


Model-driven Engineering in Practice: Integrated Performance Decision Support for Process-centric Business Impact Analysis

Authors:

David Redlich (Lancaster University)
Ulrich Winkler (Queen's University)
Thomas Molka (University of Manchester)
Wasif Gilani (SAP Research Centre)

Abstract:

Modern businesses and business processes depend on an increasingly interconnected set of resources, which can be affected by external and internal factors at any time. Threats like natural disasters, terrorism, or even power blackouts can disrupt an organisation’s resource infrastructure, which in turn negatively impacts the performance of dependent business processes. To assist business analysts in dealing with this ever-increasing complexity of interdependent business structures, a model-driven workbench named Model-Driven Business Impact Analysis (MDBIA) has been developed to predict the consequences of disruptions at the business-process level of an organisation. An existing Model-Driven Performance Engineering (MDPE) workbench, which originally provided process-centric performance decision support, has been adapted and extended to meet the additional requirements of business impact analysis. The fundamental concepts of the resulting MDBIA workbench, including the key models applied and the transformation chain, are presented and evaluated in this paper.
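
The core idea of propagating a resource disruption to the business processes that depend on it can be sketched in a few lines. This is a toy illustration only, not the MDBIA workbench's models or transformation chain; all process names, resources and figures are hypothetical.

    # Toy illustration: map disrupted resources to the business processes that
    # depend on them and report which baseline throughput is at risk.
    resource_dependencies = {
        "OrderFulfilment": {"ERP", "Warehouse"},
        "CustomerSupport": {"CRM"},
    }
    baseline_throughput = {"OrderFulfilment": 120, "CustomerSupport": 300}  # cases/hour

    def impacted_processes(disrupted_resources):
        """Return processes whose resource set intersects the disruption."""
        return {
            process
            for process, resources in resource_dependencies.items()
            if resources & disrupted_resources
        }

    disruption = {"Warehouse"}  # e.g. a power blackout at the warehouse site
    for process in impacted_processes(disruption):
        print(f"{process}: baseline {baseline_throughput[process]} cases/hour at risk")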

DOI: 10.1145/2568088.2576797

Full text: PDF


Continuous Validation of Load Test Suites

Authors:

Mark D. Syer (Queen's University)
Zhen Ming Jiang (York University)
Meiyappan Nagappan (Queen's University)
Ahmed E. Hassan (Queen's University)
Mohamed Nasser (BlackBerry)
Parminder Flora (BlackBerry)

Abstract:

Ultra-Large-Scale (ULS) systems face continuously evolving field workloads in terms of activated/disabled feature sets, varying usage patterns and changing deployment configurations. These evolving workloads often have a large impact on the performance of a ULS system. Hence, continuous load testing is critical to ensuring the error-free operation of such systems. A common challenge facing performance analysts is to validate whether a load test closely resembles the current field workloads. Such validation may be performed by comparing execution logs from the load test and the field. However, the size and unstructured nature of execution logs make such a comparison infeasible without automated support. In this paper, we propose an automated approach that validates whether a load test resembles the field workload and, if not, determines how they differ, by comparing execution logs from the load test and the field. Performance analysts can then update their load test cases to eliminate such differences, hence creating more realistic load test cases. We perform three case studies on two large systems: one open-source system and one enterprise system. Our approach identifies differences between load tests and the field with a precision of ≈75%, compared to only ≈16% for the state-of-the-practice.
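
A minimal sketch of the general idea (summarizing each execution log as an event-frequency profile and flagging events whose relative frequencies diverge) is shown below. It is illustrative only and does not reproduce the authors' approach; the log lines and threshold are invented.

    # Minimal sketch: compare a load-test log against a field log by the
    # relative frequency of event types and flag large differences.
    # Illustrative only; not the paper's actual technique.
    from collections import Counter

    def event_profile(log_lines):
        """Relative frequency of each event type (here: the line's first token)."""
        events = Counter(line.split()[0] for line in log_lines if line.strip())
        total = sum(events.values())
        return {event: count / total for event, count in events.items()}

    def workload_differences(field_log, test_log, threshold=0.05):
        """Events whose relative frequency differs by more than the threshold."""
        field, test = event_profile(field_log), event_profile(test_log)
        return {
            event: (field.get(event, 0.0), test.get(event, 0.0))
            for event in set(field) | set(test)
            if abs(field.get(event, 0.0) - test.get(event, 0.0)) > threshold
        }

    field_log = ["LOGIN u1", "BROWSE u1", "BROWSE u2", "CHECKOUT u1", "BROWSE u3"]
    test_log = ["LOGIN u1", "LOGIN u2", "LOGIN u3", "BROWSE u1"]
    print(workload_differences(field_log, test_log))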

DOI: 10.1145/2568088.2568101

Full text: PDF
