Wednesday, 13 December 2017

Session 9: Performance and Power

Green Domino Incentives: Impact of Energy-aware Adaptive Link Rate Policies in Routers

Authors:

Cyriac James (University of Calgary)
Niklas Carlsson (Linköping University)

Abstract:

To reduce the energy consumption of lightly loaded routers, operators are increasingly incentivized to use Adaptive Link Rate (ALR) policies and techniques. These techniques typically save energy by adapting link service rates or by identifying opportune times to put interfaces into low-power sleep/idle modes. In this paper, we present a trace-based analysis of the impact that a router implementing these techniques has on neighboring routers. We show that policies adapting the service rate at larger time scales, either by changing the service rate of the link interface itself or by changing which redundant heterogeneous link is active, typically have large positive effects on neighboring routers, with downstream routers able to achieve up to 30% additional energy savings when upstream routers implement ALR policies. Policies that save energy by temporarily placing the interface in a low-power sleep/idle mode typically have a smaller, but still positive, impact on neighboring routers. Best are hybrid policies that combine these two techniques: they consistently achieve the biggest energy savings and have positive cascading effects on surrounding routers. Our results show that implementation of ALR policies can contribute to large-scale positive domino incentive effects, as they further increase the potential energy savings seen by neighboring routers that consider implementing ALR techniques, while satisfying performance guarantees on the routers themselves.
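
As a rough illustration of the two families of ALR techniques the abstract describes, the sketch below models a single interface under rate adaptation and under a sleep/idle mode. All power values, the two-rate interface design, and the wake-up penalty are invented for illustration and are not taken from the paper's trace-based model:

```python
# Illustrative (not the paper's) energy model for one link interface,
# driven by per-interval utilization samples in [0, 1].

FAST_W, SLOW_W, SLEEP_W = 1.0, 0.4, 0.1   # hypothetical power draws (watts)
WAKE_PENALTY_W = 0.2                       # hypothetical cost to wake from sleep

def energy_no_alr(util):
    # Baseline: the interface always runs at the fast rate.
    return FAST_W * len(util)

def energy_rate_adapt(util, threshold=0.3):
    # Rate adaptation: run the slow rate whenever the load fits within it.
    return sum(SLOW_W if u <= threshold else FAST_W for u in util)

def energy_sleep(util):
    # Sleep mode: sleep through idle intervals, pay a penalty on wake-up.
    total, asleep = 0.0, False
    for u in util:
        if u == 0:
            total += SLEEP_W
            asleep = True
        else:
            total += FAST_W + (WAKE_PENALTY_W if asleep else 0.0)
            asleep = False
    return total

trace = [0.0, 0.0, 0.1, 0.6, 0.2, 0.0, 0.8, 0.05]
base = energy_no_alr(trace)
print(f"rate-adaptation savings: {1 - energy_rate_adapt(trace) / base:.1%}")
print(f"sleep-mode savings:      {1 - energy_sleep(trace) / base:.1%}")
```

On a mostly idle trace like this one, rate adaptation saves more than sleep mode because the wake-up penalty erodes the sleep savings, which is consistent in spirit with the abstract's observation that sleep/idle policies have a smaller (though positive) effect.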

DOI: 10.1145/2668930.2688045

Full text: PDF


Analysis of the Influences on Server Power Consumption and Energy Efficiency for CPU-Intensive Workloads

Authors:

Jóakim v. Kistowski (University of Würzburg)
Hansfried Block (Fujitsu Technology Solutions GmbH)
John Beckett (Dell Inc.)
Klaus-Dieter Lange (Hewlett-Packard Company)
Jeremy A. Arnold (IBM Corporation)
Samuel Kounev (University of Würzburg)

Abstract:

Energy efficiency of servers has become a significant research topic in recent years, as server energy consumption varies depending on multiple factors, such as server utilization and workload type. Server energy analysis and estimation must take all relevant factors into account to ensure reliable estimates and conclusions. Thorough system analysis requires benchmarks capable of testing different system resources at different load levels using multiple workload types. Server energy estimation approaches, on the other hand, require knowledge about the interactions of these factors for the creation of accurate power models. Common approaches to energy-aware workload classification categorize workloads by the resource types they use. However, they rarely take into account differences between workloads targeting the same resources. Industrial energy-efficiency benchmarks typically do not evaluate the system's energy consumption at different resource load levels, and they only provide data for system analysis at maximum system load. In this paper, we benchmark multiple server configurations using the CPU worklets included in SPEC's Server Efficiency Rating Tool (SERT). We evaluate the impact of load levels and different CPU workloads on power consumption and energy efficiency. We analyze how functions approximating the measured power consumption differ over multiple server configurations and architectures. We show that workloads targeting the same resource can differ significantly in their power draw and energy efficiency. The power consumption of a given workload type varies depending on utilization, hardware and software configuration. The power consumption of CPU-intensive workloads does not scale uniformly with increased load, nor do hardware or software configuration changes affect it in a uniform manner.
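
The abstract's claim that CPU-intensive power draw does not scale uniformly with load can be illustrated with a small sketch. The (load, power) figures below are hypothetical, not SERT measurements; they only show how a measured curve can deviate from a naive linear model anchored at idle and full load:

```python
# Hypothetical SERT-style load-level measurements for one server
# (utilization fraction -> watts); numbers are invented for illustration.
measurements = {0.25: 120.0, 0.50: 160.0, 0.75: 210.0, 1.00: 280.0}
IDLE_W = 90.0  # hypothetical idle power

for load, watts in sorted(measurements.items()):
    # Naive linear model: interpolate between idle and full-load power.
    linear = IDLE_W + load * (measurements[1.00] - IDLE_W)
    eff = load / watts  # normalized throughput per watt
    print(f"load {load:4.0%}: measured {watts:5.1f} W, "
          f"linear model {linear:5.1f} W, efficiency {eff:.4f}")
```

With these made-up numbers the mid-load measurements fall below the linear interpolation, so a power model fitted only at idle and peak would over-estimate consumption at intermediate load levels; this is the kind of nonuniformity the paper quantifies across configurations.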

DOI: 10.1145/2668930.2688057

Full text: PDF


Measuring Server Energy Proportionality

Authors:

Chung-Hsing Hsu (Oak Ridge National Laboratory)
Stephen W. Poole (Oak Ridge National Laboratory)

Abstract:

In performance engineering, metrics are often used to track progress over time. Wary of the potential bias of using a single metric, performance engineers tend to use multiple metrics for reasoning. However, this approach has its own challenges. In this work we study one of these challenges in the context of analyzing trends in server energy proportionality. We examine a wide range of metrics for measuring energy proportionality, aiming to determine which metrics are essential and which are redundant. We do this by comparing the trend curves of the metrics over the published results of the SPECpower_ssj2008 benchmark. While the context is specific, the proposed analysis method is quite general. We hope that this method will help practitioners do performance engineering more effectively.
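
One common way to quantify energy proportionality from SPECpower_ssj2008-style results is to compare the measured power curve against the ideal proportional line. The sketch below computes two such metrics, dynamic range and an area-based EP score in the style of Wong and Annavaram, on invented data; neither the data nor this particular metric selection is taken from the paper:

```python
# SPECpower_ssj2008-style (utilization, power) pairs; values are invented.
utils  = [0.0, 0.25, 0.50, 0.75, 1.0]
powers = [90.0, 120.0, 160.0, 210.0, 280.0]   # watts at each load level
peak = powers[-1]

# Dynamic range: fraction of peak power saved at idle.
dynamic_range = 1 - powers[0] / peak

def trapezoid(xs, ys):
    # Area under a piecewise-linear curve via the trapezoid rule.
    return sum((xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2
               for i in range(len(xs) - 1))

# Area-based EP score: 1 minus the excess area of the measured curve
# over the ideal proportional line, normalized by the ideal area.
ideal = [u * peak for u in utils]
ep = 1 - (trapezoid(utils, powers) - trapezoid(utils, ideal)) / trapezoid(utils, ideal)

print(f"dynamic range: {dynamic_range:.2f}, EP score: {ep:.2f}")
```

A perfectly proportional server scores 1.0 on both metrics, yet the two can rank real servers differently, which is exactly the kind of redundancy-versus-essentialness question the paper's trend-curve comparison addresses.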

DOI: 10.1145/2668930.2688049

Full text: PDF


Slow Down or Halt: Saving the Optimal Energy for Scalable HPC Systems

Authors:

Li Tan (University of California, Riverside)
Zizhong Chen (University of California, Riverside)

Abstract:

The presence of pervasive slack provides ample opportunities for achieving energy efficiency in today's HPC systems. Setting communication slack aside, classic approaches for saving energy during slack include race-to-halt and CP-aware (critical-path-aware) slack reclamation, both of which rely on power scaling techniques to adjust processor power states judiciously during the slack. Existing efforts demonstrate that CP-aware slack reclamation is superior to race-to-halt in energy saving capability. In this paper, we formally model our observation that the gap in energy saving capability between the two approaches is significantly narrowed on today's processors, given that state-of-the-art CMOS technologies allow only insignificant variation of supply voltage as the operating frequency of a processor scales. Experimental results on a large-scale power-aware cluster validate our findings.
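
The narrowing gap can be seen in a back-of-envelope model: with dynamic power P = C·V²·f, slack reclamation beats race-to-halt decisively only when the supply voltage can drop substantially along with frequency. All constants below are invented, and linear voltage-frequency scaling with a hard floor `v_min` is a simplifying assumption standing in for the CMOS behavior the paper models formally:

```python
# Toy comparison (not the paper's model) of race-to-halt vs. CP-aware
# slack reclamation for a task with `cycles` of work and `slack` seconds
# before its deadline, using dynamic power P = C * V^2 * f.
C, F_MAX, P_IDLE = 1.0, 2.0, 0.1   # hypothetical constants (GHz, watts)
cycles, slack = 4.0, 1.0           # 4 Gcycles of work, 1 s of slack

def energy(v_max, v_min):
    t_work = cycles / F_MAX
    # Race-to-halt: run at f_max, then idle through the slack.
    e_race = C * v_max**2 * F_MAX * t_work + P_IDLE * slack
    # Slack reclamation: stretch the task to just meet the deadline.
    f_low = cycles / (t_work + slack)
    # Assumed linear V-f scaling, clamped at the minimum supply voltage.
    v_low = max(v_min, v_max * f_low / F_MAX)
    e_dvfs = C * v_low**2 * f_low * (t_work + slack)
    return e_race, e_dvfs

for v_min in (0.5, 0.9):   # wide vs. narrow voltage range (v_max = 1.0)
    e_race, e_dvfs = energy(1.0, v_min)
    print(f"v_min={v_min}: race-to-halt {e_race:.2f} J, "
          f"slack reclamation {e_dvfs:.2f} J")
```

With a wide voltage range the reclamation energy is far below race-to-halt, but with a narrow range (high `v_min`) the two converge, mirroring the abstract's observation about modern processors.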

DOI: 10.1145/2668930.2695530

Full text: PDF
