Tuesday, 28 March 2017

Mission Statement

The mission of SPEC's Research Group (RG) is to promote innovative research in quantitative system evaluation and analysis by serving as a platform for collaborative research efforts and by fostering interaction between industry and academia in the field.

The scope of the group includes computer benchmarking, performance evaluation, and experimental system analysis in general, considering both classical performance metrics, such as response time, throughput, scalability, and efficiency, and other non-functional system properties grouped under the term dependability, e.g., availability, reliability, and security.

The conducted research efforts span the design of metrics for system evaluation as well as the development of methodologies, techniques, and tools for measurement, load testing, profiling, workload characterization, and dependability and efficiency evaluation of computing systems.

Current and planned activities of the RG include:

  • Establish and maintain a repository of peer-reviewed tools for quantitative system evaluation and analysis.
  • Establish and supervise Working Groups focused on developing representative application scenarios and workloads, referred to as research benchmarks, for existing or newly emerging technologies and application domains.
  • Review and publish proposed tools and research benchmarks.
  • Publish a regular newsletter as well as peer-reviewed research articles and white papers in the area of quantitative system evaluation and analysis.
  • Establish and maintain a portal for benchmarking-related resources, including a benchmarking research bibliography, popular tools, white papers, and best practices.
  • Organize conferences and workshops fostering the transfer of knowledge between industry and academia in the areas of computer benchmarking, performance evaluation, and quantitative system evaluation and analysis, in general.
  • Recognize outstanding contributions in the research areas covered by the RG.
  • Publish a journal on benchmarking methodologies and tools.

The charter of the RG is available for download as a PDF file (v3.1).

Research Benchmarks

A major activity of the RG is the establishment and supervision of Working Groups focused on developing research benchmarks for existing or newly emerging technology domains. Unlike conventional benchmarks, research benchmarks are not intended for direct comparison and marketing of existing products. Their goal is rather to provide representative application scenarios, defined at a higher level of abstraction, that can be used as a basis to evaluate early prototypes and research results as well as full-blown implementations in the respective technology domain. Research benchmarks can be defined both for existing technologies and for new technologies at the early stages of their inception, before full-fledged industrial implementations are pursued.

Research benchmarks are designed to have long-term relevance and representativeness in the respective technology domain and are targeted for use in both academic and industrial research. They may also be used by other SPEC committees outside of the RG as a basis to implement standard benchmarks that measure and compare specific platforms, similarly to the way conventional SPEC benchmarks are used. As new platforms emerge, benchmark implementations will be updated; the scenarios themselves, however, are more stable and are thus expected to have a longer life span than conventional SPEC benchmarks.

In summary, research benchmarks can be characterized as follows:

  1. Their main goal is to provide a basis for in-depth quantitative analysis and evaluation of early prototypes and research results, as well as full-blown implementations, in academic and industrial research environments.
  2. They are not intended for direct comparison and marketing of existing products; however, they can be used as a basis for building conventional industry-standard benchmarks.
  3. They are applicable to both existing and newly emerging technologies.
  4. They are defined at a higher level of abstraction and thus leave room for a wide range of different implementations.
  5. They are more stable than conventional benchmarks and are expected to have a longer life span and relevance for the considered technology domain.
  6. They are normally more flexible and customizable to different usage scenarios.
  7. They are intended to provide a range of possible metrics and leave it up to users to decide how to weigh them based on the goals and scope of their analysis.