SCAR
Serverless Container-aware ARchitectures (e.g. Docker in AWS Lambda)
SCAR is a framework to transparently execute containers out of Docker images in AWS Lambda. It supports images from Docker Hub and is integrated with an API Gateway to expose an application via a highly-available HTTP-based REST API that supports both synchronous and asynchronous invocations. It is also integrated with AWS Batch to dynamically deploy elastic computing clusters, even with GPU support. SCAR allows the creation of serverless workflows by combining functions that run on either AWS Batch or AWS Lambda, thus effectively creating highly-scalable cross-service serverless workflows. It is also integrated with OSCAR using the same Functions Definition Language (FDL).
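As an illustration, the following is a minimal FDL-style sketch for deploying a Docker Hub image as a Lambda function with SCAR; the field names follow the examples in the SCAR documentation but are shown here as an assumption and may differ across releases.

```yaml
# Illustrative FDL sketch (field names assumed from SCAR documentation examples):
# deploy the public Docker Hub image grycap/cowsay as an AWS Lambda function.
functions:
  aws:
  - lambda:
      name: scar-cowsay        # name of the Lambda function to create
      memory: 1024             # memory (in MB) allocated to the function
      container:
        image: grycap/cowsay   # Docker Hub image executed inside AWS Lambda
```

A file like this would typically be deployed and executed through the SCAR CLI (e.g. `scar init -f cowsay.yaml` followed by `scar run -f cowsay.yaml`).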
SCAR supports the execution of event-driven workflows with Docker-based data-processing activities that can scale to a large number of simultaneously processed jobs. A CLI (Command-Line Interface) is provided to interact with the tool.
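To make the event-driven pattern concrete, the sketch below shows how an FDL configuration might bind a containerized processing step to input and output storage paths and delegate oversized jobs to AWS Batch; fields such as init_script, execution_mode, input and output are taken from SCAR's documented examples but should be treated as illustrative rather than a normative schema.

```yaml
# Illustrative event-driven FDL sketch (schema assumed from SCAR documentation examples):
# each file uploaded to the input path triggers one function invocation, and the
# results are uploaded to the output path.
functions:
  aws:
  - lambda:
      name: scar-grayify
      container:
        image: grycap/imagemagick    # Docker Hub image with the processing tool
      init_script: grayify-image.sh  # user-supplied script run on each invocation
      execution_mode: lambda-batch   # delegate to AWS Batch when the job exceeds Lambda limits
      input:
      - storage_provider: s3
        path: scar-test/input        # S3 prefix whose upload events trigger the function
      output:
      - storage_provider: s3
        path: scar-test/output       # S3 prefix where results are stored
```

Under this configuration, uploading many files to the input prefix yields one invocation per file, which is how the workflow scales to a large number of simultaneously processed jobs.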
SCAR, within AI-SPRINT, provides several benefits. First, its integration with OSCAR allows the Functions Definition Language to be used to define data-driven serverless workflows along the computing continuum. Second, it has been integrated with the native support for Docker containers introduced for AWS Lambda in 2021, a capability that SCAR pioneered in 2017. Third, persistent storage for Lambda function invocations has been integrated through Amazon EFS. Fourth, it has been integrated with rCUDA to leverage remote GPUs for serverless workloads running in AWS Lambda. This allows inference workflows to be extended into a public cloud using the FaaS computational model.