Wednesday, 13 December 2017

Session 6: Large-scale and Distributed Systems

MT-WAVE: Profiling Multi-Tier Web Applications

Authors:

Anthony Arkles (University of Saskatchewan)
Dwight Makaroff (University of Saskatchewan)

Abstract:

Modern web applications consist of many distinct services that collaborate to provide the full application functionality. To improve application performance, developers must identify the root cause of performance problems, yet identifying and fixing such problems in these distributed, heterogeneous applications can be very difficult. As web applications become more complicated, the number of systems involved will continue to grow and full-system performance tuning will become harder still.

We postulate that multi-tier profiling, starting at the web browser, is the appropriate way to solve this problem. Instrumenting from the web browser, as the user experiences it, ensures that we can tell what each service in the application is contributing to overall page-load time; thus, each tier must provide instrumentation data that developers can use to quickly identify the root cause of performance problems.

We have built MT-WAVE, a system that integrates with the different tiers of a web application (including a browser extension) and collects lightweight instrumentation data at a central location via X-Trace facilities. The collected data is presented through our visualization system, which provides varying levels of detail.

To validate our approach, we performed case studies of two applications, both of which yielded performance insights. In particular, we identified and fixed a significant and unintuitive bottleneck in an open-source project management application and verified caching behaviour in a cloud-hosted commercial product. While our case studies use specific technologies, we believe that most web technologies in common use today would require only straightforward modifications to utilize MT-WAVE tracing facilities.

This tool is designed to be used by application developers and system administrators while testing new software, or after deployment when it becomes clear that existing performance is not meeting user needs.
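The core idea of multi-tier tracing as described above is that every tier reports timing data against a shared task identifier minted at the browser, so the collector can reconstruct the full page load. The sketch below is a hypothetical simplification (the names `TraceReport`, `start_trace`, and `collect` are illustrative, not MT-WAVE's actual API, and real X-Trace metadata carries more than a task ID):

```python
import uuid

class TraceReport:
    """A single timing report from one tier, keyed by a shared task ID."""
    def __init__(self, task_id, tier, operation, duration_ms):
        self.task_id = task_id
        self.tier = tier
        self.operation = operation
        self.duration_ms = duration_ms

def start_trace():
    # The browser extension would mint the task ID at page-load start
    # and propagate it to each downstream tier (e.g. in a request header).
    return uuid.uuid4().hex

def collect(reports, task_id):
    # Group reports by task ID so a visualizer can reconstruct one page load.
    return [r for r in reports if r.task_id == task_id]

# Two tiers report against the same task ID; an unrelated report is filtered out.
tid = start_trace()
reports = [
    TraceReport(tid, "browser", "page-load", 840.0),
    TraceReport(tid, "app-server", "render", 120.0),
    TraceReport("unrelated", "db", "query", 15.0),
]
page = collect(reports, tid)
```

Because the ID is minted at the browser, the aggregated reports directly answer what each tier contributed to the user-perceived page-load time.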

DOI: 10.1145/1958746.1958783

Full text: PDF


A Capacity Planning Process for Performance Assurance of Component-Based Distributed Systems

Authors:

Nilabja Roy (Vanderbilt University)
Abhishek Dubey (Vanderbilt University)
Aniruddha Gokhale (Vanderbilt University)
Larry Dowdy (Vanderbilt University)

Abstract:

For service providers of multi-tiered component-based applications, such as web portals, assuring high performance and availability to their customers without impacting revenue requires effective and careful capacity planning that aims at minimizing the number of resources and utilizing them efficiently, while simultaneously supporting a large customer base and meeting their service level agreements. This paper presents a novel, hybrid capacity planning process that results from a systematic blending of 1) analytical modeling, where traditional modeling techniques are enhanced to overcome their limitations in providing accurate performance estimates; 2) profile-based techniques, which determine performance profiles of individual software components for use in resource allocation and balancing resource usage; and 3) allocation heuristics that determine the minimum number of resources needed to allocate software components. Our results illustrate that using our technique, performance (i.e., bounded response time) can be assured while reducing operating costs by using 25% fewer resources and increasing revenues by handling 20% more clients compared to traditional approaches.
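To give a flavour of the analytical-modeling ingredient, a minimal queueing-based sketch (not the paper's enhanced model) can estimate the smallest server count that keeps response time within an SLA: treat each server as an M/M/1 queue with the load split evenly, and grow the server count until the bound holds. The function names and the even-split assumption are illustrative:

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue; requires utilization < 1."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable at this load")
    return 1.0 / (service_rate - arrival_rate)

def min_servers(total_arrival_rate, service_rate, sla_seconds):
    """Smallest number of identical servers (load split evenly) whose
    per-server M/M/1 mean response time meets the SLA."""
    n = 1
    while True:
        per_server_rate = total_arrival_rate / n
        if (per_server_rate < service_rate and
                mm1_response_time(per_server_rate, service_rate) <= sla_seconds):
            return n
        n += 1
```

For example, 250 req/s against servers that each process 100 req/s with a 0.5 s SLA needs three servers; the paper's hybrid process refines such estimates with measured per-component profiles.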

DOI: 10.1145/1958746.1958784

Full text: PDF


A New Business Model for Massively Multiplayer Online Games

Authors:

Vlad Nae (University of Innsbruck)
Radu Prodan (University of Innsbruck)
Alexandru Iosup (Delft University of Technology)
Thomas Fahringer (University of Innsbruck)

Abstract:

Today, highly successful Massively Multiplayer Online Games (MMOGs) have millions of registered users and hundreds of thousands of active concurrent players. To sustain their highly variable load, game operators over-provision a large static infrastructure capable of sustaining the game peak load, even though a large portion of the resources is unused most of the time. This inefficient resource utilisation has negative economic impacts: it prevents any but the largest hosting centres from joining the market and dramatically increases prices.

In this paper, we propose a new business model of hosting and operating MMOGs based on Cloud computing principles involving four actors: resource provider, game operator, game provider, and client. Our model efficiently provisions on-demand virtualised resources to game sessions based on their dynamic client load, which dramatically decreases prices and gives small and medium enterprises the opportunity of joining the market through zero initial investment.
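The economic argument above rests on the gap between peak-sized static provisioning and load-following on-demand provisioning. A hypothetical back-of-the-envelope comparison (illustrative numbers and function names, not the paper's simulation) makes the gap concrete:

```python
def vms_needed(players, players_per_vm):
    """Ceiling division: VMs required to host a given player count."""
    return -(-players // players_per_vm)

def static_cost(load_trace, cost_per_vm_hour, players_per_vm):
    """Static over-provisioning: pay for the peak every hour of the trace."""
    peak_vms = vms_needed(max(load_trace), players_per_vm)
    return peak_vms * cost_per_vm_hour * len(load_trace)

def dynamic_cost(load_trace, cost_per_vm_hour, players_per_vm):
    """On-demand provisioning: each hour, pay only for the current load."""
    return sum(vms_needed(load, players_per_vm) * cost_per_vm_hour
               for load in load_trace)

# A toy hourly player-count trace with one peak hour.
trace = [100, 500, 1000, 200]
```

Here the static operator pays for 10 VMs over all four hours (cost 40 at $1/VM-hour), while dynamic provisioning pays 18; real MMOG traces with day/night cycles widen this gap further, which is what lowers the entry barrier for small providers.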

We validate our new model and its underlying business relationships through trace-based simulations utilising six months' worth of monitoring data from a real-life MMOG, using emulated resources from 16 of the largest Cloud resource providers currently on the market. We demonstrate that our model can operate state-of-the-art MMOGs with an average monthly gross profit of nearly $6 million excluding game purchase prices, overheads and taxation, while being able to maintain and control the QoS offered to all clients. Finally, we show how our approach is capable of operating next-generation, very highly interactive MMOGs with a small increase of 5.8% in the subscription price.

DOI: 10.1145/1958746.1958785

Full text: PDF


MassConf: Automatic Configuration Tuning By Leveraging User Community Information

Authors:

Wei Zheng (Rutgers University)
Ricardo Bianchini (Rutgers University)
Thu D. Nguyen (Rutgers University)

Abstract:

Configuring modern enterprise software can be extremely difficult because its behavior often depends on a large number of configuration parameters. Software vendors can simplify the configuration process for new users by collecting and using configuration information from existing users. In particular, we observe that (1) a 'good' configuration may work well for many different users, and (2) multiple configurations may work well for each user. We leverage these observations to design MassConf, a system that collects and uses existing configurations to automatically configure new software installations. Our evaluations with a case study confirm our observations and show that MassConf successfully reaches the targets of many more new installations than an existing efficient optimization algorithm.
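The two observations suggest a simple strategy: rank the community's configurations by how many users they have worked for, then try them in order on the new installation until one meets its target. The sketch below is a hypothetical rendering of that idea (the data shape and function names are assumptions, not MassConf's actual design):

```python
from collections import Counter

def rank_configs(history):
    """history: (config_id, met_target) observations from existing users.
    Return config IDs ordered by how many users each satisfied."""
    wins = Counter(cfg for cfg, met_target in history if met_target)
    return [cfg for cfg, _ in wins.most_common()]

def auto_configure(history, try_config):
    """Try community configurations in ranked order on the new installation;
    return the first whose trial meets the target, else None."""
    for cfg in rank_configs(history):
        if try_config(cfg):
            return cfg
    return None

# Config "A" worked for three prior users, "B" for one.
history = [("A", True), ("A", True), ("A", True), ("B", True), ("A", False)]
```

Because multiple configurations can satisfy any given user, falling through to the next-ranked candidate is what lets the popular-first ordering still cover installations the top configuration misses.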

DOI: 10.1145/1958746.1958786

Full text: PDF


Global Cost Diversity Aware Dispatch Algorithm for Heterogeneous Data Centers

Authors:

Ananth Narayan Sankaranarayanan (Simon Fraser University)
Somsubhra Sharangi (Simon Fraser University)
Alexandra Fedorova (Simon Fraser University)

Abstract:

Large, Internet-based companies service user requests from multiple data centers located across the globe. These data centers often house a heterogeneous computing infrastructure and draw electricity from the local electricity market. Reducing the electricity costs of operating these data centers is a challenging problem, and in this work, we propose a novel solution that exploits both the data center heterogeneity and global electricity market diversity to reduce data center operating cost. We evaluate our solution in our test-bed that simulates a heterogeneous data center, using real-world request workload and real-world electricity prices. We show that our strategies achieve cost and energy savings of at least 21% over a naïve load balancing scheme that distributes requests evenly across data centers, and outperform existing solutions which either do not exploit the electricity market diversity or do not exploit data center hardware diversity.
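The dispatch idea combines two per-data-center quantities: the local electricity price and the (hardware-dependent) energy a request consumes there. A minimal, hypothetical sketch of the selection step follows; a real dispatcher would also respect capacity, latency, and SLA constraints that this toy version ignores:

```python
def cost_per_request(price_per_kwh, joules_per_request):
    """Electricity cost of serving one request; 1 kWh = 3.6e6 J."""
    return price_per_kwh * joules_per_request / 3.6e6

def cheapest_dc(datacenters):
    """datacenters: name -> (local price in $/kWh, energy per request in J,
    reflecting hardware heterogeneity). Pick the cheapest per request."""
    return min(datacenters,
               key=lambda name: cost_per_request(*datacenters[name]))

# A cheap-power site with inefficient hardware can lose to an
# expensive-power site with efficient hardware, and vice versa.
dcs = {"east": (0.10, 50.0), "eu": (0.05, 120.0)}
```

In this example "east" wins despite double the electricity price, because its hardware needs less than half the energy per request; this interplay is why exploiting only price diversity or only hardware diversity leaves savings on the table.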

DOI: 10.1145/1958746.1958787

Full text: PDF
