Mowgli

Mowgli provides a comprehensive evaluation framework for cloud-hosted DBMS that supports the evaluation objectives performance, scalability, elasticity, and availability. Mowgli fully automates the DBMS evaluation process for each evaluation objective, i.e. allocating cloud resources, deploying the DBMS on the allocated resources, executing the workloads, executing runtime adaptations (elasticity objective), injecting cloud resource failures (availability objective), releasing the cloud resources, and processing the objective-specific metrics. Evaluations are specified in reusable, JSON-based definitions, which makes them portable and reproducible.
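
To give an impression of such a definition, the following is a minimal sketch that builds a hypothetical scenario as a Python dictionary and writes it to a JSON file; all field names are illustrative assumptions, the actual schema is defined by Mowgli's OpenAPI specification.

    import json

    # Hypothetical evaluation scenario; the field names are illustrative and do
    # NOT reflect Mowgli's actual schema, which is defined by its OpenAPI spec.
    scenario = {
        "evaluationObjective": "scalability",   # performance | scalability | elasticity | availability
        "cloud": {
            "provider": "openstack",            # or "aws-ec2"
            "instanceType": "m1.large",
        },
        "dbms": {
            "type": "cassandra",                # e.g. cassandra, cockroachdb, mongodb
            "clusterSize": 3,
            "scaleOutTo": 6,                    # runtime adaptation for the elasticity objective
        },
        "workload": {
            "benchmark": "ycsb",                # or "tpcc"
            "workloadType": "workloada",
            "recordCount": 1000000,
            "threads": 64,
        },
        "repetitions": 3,
    }

    # Persisting the definition is what makes the evaluation portable and reproducible.
    with open("scalability-scenario.json", "w") as f:
        json.dump(scenario, f, indent=2)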

Mowgli builds upon a distributed architecture that comprises multiple software components to abstract cloud resource allocation, application deployment, workload execution, and runtime adaptations.

These components communicate via REST interfaces defined by the Swagger/OpenAPI specification. Each component is encapsulated as a Docker container, and the framework is deployed via a single docker-compose file. Mowgli provides a basic web-based user interface as well as a REST API for easy integration into other services.
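
Assuming the framework has been started via docker-compose, an evaluation could then be triggered through the REST API roughly as sketched below; the host, port, and endpoint path are assumptions for illustration, the actual routes are given by the Swagger/OpenAPI specification.

    import json

    import requests

    # Assumed base URL of a locally deployed Mowgli instance (host and port are
    # not prescribed here and depend on the docker-compose configuration).
    MOWGLI_API = "http://localhost:8080"

    # Load the previously created scenario definition.
    with open("scalability-scenario.json") as f:
        scenario = json.load(f)

    # Submit the scenario; Mowgli then allocates the cloud resources, deploys the
    # DBMS, executes the workload, releases the resources, and collects the metrics.
    # The "/scenarios" endpoint is a hypothetical placeholder.
    response = requests.post(f"{MOWGLI_API}/scenarios", json=scenario, timeout=30)
    response.raise_for_status()
    print("Evaluation accepted:", response.json())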

Mowgli supports evaluations on AWS EC2 and OpenStack resources. It supports multiple DBMS, such as Apache Cassandra, CockroachDB, or MongoDB, and established DBMS benchmarks such as YCSB or TPC-C. Its loosely coupled architecture also eases the integration of additional technologies. More details can be found in the documentation and the associated publications.

Mowgli is currently being revised and further developed at Ulm University within the scope of the spin-off benchANT, which offers Benchmarking as a Service. If you are interested in the topic, want to provide feedback, add requirements, or discuss ideas, just get in touch.

Requirements

  • Docker >= 18.09.3
  • Docker-compose >= 1.23.2
  • OpenStack/EC2 credentials
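
A quick way to verify the first two requirements is to query the installed versions from the command line, e.g. with a small Python helper like the following (not part of Mowgli itself):

    import subprocess

    # Print the locally installed Docker and docker-compose versions so they can
    # be compared against the minimum requirements listed above.
    for tool in ("docker", "docker-compose"):
        try:
            result = subprocess.run([tool, "--version"], capture_output=True, text=True, check=True)
            print(result.stdout.strip())
        except (FileNotFoundError, subprocess.CalledProcessError):
            print(f"{tool} is not installed or not on the PATH")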

Downloads

Website

https://www.uni-ulm.de/en/in/omi/forschung/ergebnisse/mowgli/

Contributors

  • Daniel Seybold
  • Jörg Domaschka
  • Simon Volpert

Maintainers

  • Daniel Seybold, daniel.seybold(at)uni-ulm.de
    Institute of Information Resource Management, Ulm University
    Ulm, Germany

Version

0.2

License

Apache License 2.0

Related publications and projects

  • Daniel Seybold, Moritz Keppler, Daniel Gründler, and Jörg Domaschka, "Mowgli: Finding Your Way in the DBMS Jungle". In Proceedings of the 2019 ACM/SPEC International Conference on Performance Engineering (ICPE '19), ACM, New York, NY, USA, pp. 321–332, 2019. DOI 10.1145/3297663.3310303
  • Daniel Seybold, Simon Volpert, Stefan Wesner, André Bauer, Nikolas Herbst, and Jörg Domaschka, "Kaa: Evaluating Elasticity of Cloud-Hosted DBMS". In 2019 IEEE International Conference on Cloud Computing Technology and Science (CloudCom), Sydney, Australia, pp. 54–61, 2019. DOI 10.1109/CloudCom.2019.00020
  • Daniel Seybold, Stefan Wesner, and Jörg Domaschka, "King Louie: Reproducible Availability Benchmarking of Cloud-Hosted DBMS". In Proceedings of the 35th Annual ACM Symposium on Applied Computing (SAC '20), ACM, New York, NY, USA, pp. 144–153, 2020. DOI 10.1145/3341105.3373968
  • Johannes Grohmann, Daniel Seybold, Simon Eismann, Mark Leznik, Samuel Kounev, and Jörg Domaschka, "Baloo: Measuring and Modeling the Performance Configurations of Distributed DBMS". In 2020 IEEE 28th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS), 2020.
  • Jörg Domaschka, Simon Volpert, and Daniel Seybold, "Hathi: An MCDM-based Approach to Capacity Planning for Cloud-Hosted DBMS". In 2020 IEEE/ACM 7th International Conference on Utility and Cloud Computing (UCC), IEEE, 2020.