Krake
Krake [ˈkʀaːkə] is an orchestrator engine for containerised and virtualised workloads across distributed and heterogeneous cloud platforms. It creates a thin layer of aggregation on top of the different platforms (e.g., Kubernetes) and presents them through a single interface to the cloud user. The user's workloads are scheduled depending on both user requirements (hardware, latencies) and platform characteristics (energy efficiency, load); the scheduling algorithm can be optimised, for example, for latency or energy consumption. Krake can be leveraged for a wide range of application scenarios, such as the central management of distributed compute capacities as well as application management in Edge Cloud infrastructures. Together with tools that provision virtualised infrastructure (e.g., IM), Krake can also create virtual machines, including Kubernetes clusters, on registered cloud backends (e.g., OpenStack).
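To illustrate the idea, the sketch below ranks registered platforms by a weighted sum of their metrics; the `Backend` class, the `rank_backends` function and the metric names are assumptions made for this example, not Krake's actual API. It shows how the same workload can land on different sites depending on whether latency or energy consumption is weighted more heavily.

```python
from dataclasses import dataclass, field


@dataclass
class Backend:
    """Illustrative stand-in for a registered platform (e.g., a Kubernetes cluster)."""
    name: str
    metrics: dict[str, float] = field(default_factory=dict)


def rank_backends(backends, weights):
    """Score each backend as a weighted sum of its metric values (lower is better)
    and return the backends sorted from most to least preferred."""
    def score(backend):
        return sum(weights.get(metric, 0.0) * value
                   for metric, value in backend.metrics.items())
    return sorted(backends, key=score)


if __name__ == "__main__":
    backends = [
        Backend("edge-site-a", {"latency_ms": 8.0, "energy_kwh": 0.9, "load": 0.7}),
        Backend("dc-site-b", {"latency_ms": 35.0, "energy_kwh": 0.3, "load": 0.2}),
    ]
    # Weight latency heavily for a latency-sensitive workload ...
    print(rank_backends(backends, {"latency_ms": 1.0, "energy_kwh": 0.1, "load": 0.5})[0].name)
    # ... or weight energy heavily for an energy-optimised placement.
    print(rank_backends(backends, {"latency_ms": 0.05, "energy_kwh": 10.0, "load": 0.5})[0].name)
```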
Users get a single point of orchestration management: a user who has deployed, or wants to deploy, multiple applications across different cloud providers can manage their orchestration with Krake at a single level. This improves organisation, provides better visibility and saves time in the long run by removing the need to manage each platform individually. In addition, Krake can use metrics and labels to, for example, always schedule applications on the highest-performing or most cost-effective sites, and/or on the sites with the lowest latency or energy consumption.
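Label-based placement can be pictured as a filtering step in front of the metric ranking. The snippet below is again only an illustrative sketch (the constraint format and the `matches` helper are assumptions, not Krake's constraint language): sites whose labels do not satisfy the application's constraints are dropped, and metrics decide among the remaining candidates.

```python
def matches(app_constraints, site_labels):
    """Return True if every label constraint required by the application
    is present on the site with the expected value."""
    return all(site_labels.get(key) == value for key, value in app_constraints.items())


sites = {
    "edge-site-a": {"location": "DE", "tier": "edge"},
    "dc-site-b": {"location": "FR", "tier": "datacenter"},
}

# Only sites matching the application's label constraints remain candidates;
# metrics (cost, latency, energy) would then decide among them.
constraints = {"location": "DE"}
candidates = [name for name, labels in sites.items() if matches(constraints, labels)]
print(candidates)  # ['edge-site-a']
```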
From a developer's point of view, Krake can easily be extended with individual components, as its architecture is loosely coupled. It is also released under an open source licence, so anyone can suggest features or fix bugs.
Key benefits include:

- Compatibility with multiple technologies and cloud stack components (i.e., versatility)
- The opportunity to optimise energy efficiency, particularly for workloads associated with high energy consumption (e.g., training AI algorithms)