Computing Continuum • Multi-Cloud • Auto-scaling

Open Source Serverless Computing for Data-Processing Applications

A flexible Virtual Research Environment (VRE) to run Docker-based, compute-intensive workloads with serverless workflows on elastic Kubernetes clusters deployed across multiple clouds.

OSCAR platform illustration

Key Features

Elasticity

Kubernetes clusters scale up and down automatically based on workload.

Built on Kubernetes

Built on Kubernetes components for easier extension and integration.

Open Source

Apache 2.0 License, also available as a managed SaaS.

Edge-Ready Runtime

Runs on ARM-based edge devices (e.g., Raspberry Pi and NVIDIA Jetson Nano).

Scale to Zero

Reduces idle resource usage by scaling services down depending on the workload.

Observability & Operations

Tracks service status and metrics. Enforces quota allocations.

Serverless for Compute-Intensive Processing

OSCAR provides data-driven serverless computing for data-processing applications. Services are triggered by file uploads to an object storage backend and execute a user-defined shell script inside a container based on a user-defined Docker image. Executions are orchestrated as Kubernetes batch jobs, and output data can be uploaded to supported object storage backends. Synchronous invocations with scale-to-zero support are also available, as are exposed services for applications that provide their own APIs.
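As a minimal sketch of such a user-defined processing script: OSCAR's documentation describes environment variables such as `INPUT_FILE_PATH` (the file that triggered the service) and `TMP_OUTPUT_DIR` (files written there are uploaded to the output storage). The snippet below simulates those variables so it runs standalone, and the uppercase transformation is only a stand-in for real processing work:

```shell
#!/bin/sh
# Sketch of a user-defined OSCAR processing script.
# In a real service, OSCAR injects INPUT_FILE_PATH and TMP_OUTPUT_DIR;
# here they are simulated so the sketch runs standalone.
INPUT_FILE_PATH=$(mktemp)
TMP_OUTPUT_DIR=$(mktemp -d)
echo "hello oscar" > "$INPUT_FILE_PATH"

FILE_NAME=$(basename "$INPUT_FILE_PATH")
# Stand-in processing step: uppercase the input file and write the
# result to the output directory, from where OSCAR would upload it.
tr '[:lower:]' '[:upper:]' < "$INPUT_FILE_PATH" > "$TMP_OUTPUT_DIR/$FILE_NAME"
cat "$TMP_OUTPUT_DIR/$FILE_NAME"
```

In a deployed service the same script body would run inside the user-defined Docker image each time a file lands in the input bucket, with no simulation lines needed.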

Support for Multiple Storage Back-ends

Each OSCAR cluster includes MinIO so file uploads can trigger data-processing applications. Services can be chained to build data-driven workflows. Output storage also supports other backends, including Amazon S3 and EGI DataHub (based on Onedata).
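A hedged sketch of how this wiring might look in an OSCAR service definition (the cluster identifier `my-cluster`, service names, Docker images, and bucket paths below are all illustrative, not taken from the OSCAR documentation). Chaining works by pointing the second service's input at the first service's output path:

```yaml
functions:
  oscar:
  - my-cluster:                      # cluster identifier (illustrative)
      name: stage-one
      image: myorg/my-processor      # user-defined Docker image (illustrative)
      script: process.sh
      input:
      - storage_provider: minio.default
        path: demo/in                # uploads here trigger the service
      output:
      - storage_provider: minio.default
        path: demo/mid
  - my-cluster:
      name: stage-two                # chained: watches stage-one's output path
      image: myorg/my-postprocessor  # illustrative
      script: post.sh
      input:
      - storage_provider: minio.default
        path: demo/mid
      output:
      - storage_provider: s3         # e.g., Amazon S3 as the final backend
        path: my-bucket/out
```

Uploading a file to `demo/in` would then run both stages in sequence, with the final result landing in the external backend.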

Kubernetes-based Architecture

An OSCAR cluster is built on dynamically deployed, elastic Kubernetes infrastructure. With the CLUES elasticity system, clusters self-adapt to incoming workload by scaling node capacity up to the deployment limits you define.

Automated Deployment on Multi-Clouds

Provision OSCAR clusters through the Infrastructure Manager (IM) with a guided, streamlined workflow. From a single interface you can select your target cloud, apply deployment settings, and launch a reproducible OSCAR cluster in minutes.

Serverless Workflows for the Cloud Computing Continuum

OSCAR integrates with SCAR, an open-source tool for running generic applications on AWS Lambda (AWS Functions as a Service). OSCAR can also run on ARM-based edge devices such as Raspberry Pi and NVIDIA Jetson Nano boards. This enables serverless workflows across the cloud computing continuum: lightweight processing can run on-premises or at the edge, while heavier workloads run in AWS Lambda. SCAR also integrates with AWS Batch, enabling event-driven workflows for compute-intensive applications or workloads that require specialized hardware such as GPUs.

An Integrated Dashboard

Manage the full OSCAR lifecycle from a web-based dashboard: access clusters securely, configure buckets and services, compose workflows, connect Jupyter notebook-based environments, and monitor platform status in real time.

Ready to get started?

Deploy an OSCAR cluster on your preferred cloud through the IM Dashboard. No registration is required. Not ready yet? Start with the documentation and come back when you are ready.

Deploy on a Cloud