Drivers

The Scalr Agent is designed to run and isolate multiple concurrent Scalr runs within a single agent instance. Task isolation is controlled by the SCALR_AGENT_DRIVER option, which determines how the agent creates the task execution environments in which user-executable code (such as OpenTofu/Terraform applies) runs.

Scalr Agents can perform runs using the containerized drivers (Docker or Kubernetes) or execute tasks without isolation via the local driver.

By default, the driver is auto-selected between docker and kubernetes based on the backend environment. These drivers manage container orchestration and isolation by spawning fresh containers for each run stage via the Docker or Kubernetes API, allowing a single agent instance to manage multiple runs simultaneously.
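For example, to pin the driver explicitly instead of relying on auto-selection, set the option before starting the agent. This is a minimal sketch; how the option is passed depends on how you deploy the agent:

```
# Pin the task execution driver instead of relying on auto-selection.
# Valid values described on this page: docker, kubernetes, local.
export SCALR_AGENT_DRIVER=docker
```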

Local Driver

The local driver executes all workloads locally without any isolation and without additional platform requirements.

When this driver is enabled, the Scalr Agent executes tasks directly in the same environment it is running in, without isolation. To enable the local driver, start the agent with the --local command-line flag or set the SCALR_AGENT_DRIVER=local configuration option.

The local driver is useful when running agents in environments that don’t require isolation themselves — such as serverless platforms (AWS Fargate, Cloud Run, Azure Container Apps, etc.) — or when you don’t want to grant agents access to the Docker socket and prefer to manage orchestration independently.

The local driver is best used with Single Mode or by setting SCALR_AGENT_CONCURRENCY to 1 to ensure that only one run stage is executed at a time in such an environment.
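A minimal sketch of that setup, assuming the agent binary is invoked as scalr-agent (adjust to your installation):

```
# Execute run stages directly in the agent's own environment, one at a time.
# The binary name below is an assumption; SCALR_AGENT_DRIVER and
# SCALR_AGENT_CONCURRENCY are the options described on this page.
export SCALR_AGENT_DRIVER=local   # equivalent to starting the agent with --local
export SCALR_AGENT_CONCURRENCY=1  # only one run stage at a time
scalr-agent
```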

🚧

When using the local driver, the OpenTofu/Terraform executable and workspace hook scripts will have access to the host machine and its files. Ensure the Scalr Agent is running in a secure and isolated environment when using this driver.

Docker Driver

The Docker driver starts each run environment in a separate Docker container. For this driver to work, the Scalr Agent must have access to the Docker socket. Once access is granted, the agent runs each operation inside an isolated Docker container with limited CPU, memory, and disk access.

See the Docker driver configuration for details.
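As a rough sketch, when the agent itself runs as a container it needs the host's Docker socket mounted so it can spawn per-run containers. The image name and token variable below are placeholders, not authoritative values:

```
# Grant the agent access to the Docker socket so it can create a container
# per run stage. Image name and token variable are placeholders.
docker run -d \
  -e SCALR_AGENT_DRIVER=docker \
  -e SCALR_AGENT_TOKEN="<agent-pool-token>" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  scalr/agent:latest
```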

Kubernetes Driver

The Kubernetes driver starts each run environment in a separate container within an individual Kubernetes Pod. For this driver to work, the Scalr Agent must have access to the Kubernetes API server. The Kubernetes driver uses a controller/worker architecture: a single controller instance pulls tasks from the Scalr Platform and schedules them on the Kubernetes cluster, where one or more workers can pick them up.

The agent-k8s Helm chart provides a ready-to-use deployment that utilizes this driver.

See the Kubernetes driver configuration for details.
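For example, a deployment with the agent-k8s chart might look roughly like the following. The repository URL and value keys are illustrative; refer to the chart's documentation for the authoritative names:

```
# Install the agent-k8s chart; the controller pulls tasks from Scalr and
# schedules them on the cluster. Repo URL and value keys are placeholders.
helm repo add scalr-agent-helm https://scalr.github.io/agent-helm/
helm upgrade --install scalr-agent scalr-agent-helm/agent-k8s \
  --namespace scalr-agent --create-namespace \
  --set agent.token="<agent-pool-token>" \
  --set agent.url="https://<account>.scalr.io"
```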

Container Mounts

The table below explains how paths are mounted for Docker and Kubernetes drivers:

| Mount Path | Description |
| --- | --- |
| /opt/data | Mount point for data files used during container execution. |
| /opt/workdir | Mount point for the OpenTofu/Terraform configuration directory. |
| /opt/workdir/.ssh | Stores SSH keys used for secure container access. |
| /opt/workdir/.cache | XDG cache directory used by scalr-cli. Docs |
| /opt/providers/run | Directory for provider plugins used by Terraform/OpenTofu. Docs |
| /opt/providers/cache | The read-only Provider Cache managed by the Scalr Agent. Mounted via filesystem_mirror. |
| /opt/bin | Stores binaries needed for container operations. |
| /etc/ssl/certs/ca-certificates.crt | Path for mounting a CA bundle within the container. |
| /usr/bin/_entrypoint.sh | Mount path to the entrypoint script inside the container. |

Container Entrypoint

The entrypoint at /usr/bin/_entrypoint.sh is a portable, shell-based inter-process communication mechanism. It executes a sequence of shell commands and communicates results via the filesystem. The script is designed for compatibility with POSIX environments and network filesystems, ensuring proper signal propagation and a single, consistent shell environment for variables across commands.
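A schematic sketch of that pattern (not the actual script shipped with the agent) could look like this:

```
#!/bin/sh
# Schematic sketch only, NOT the agent's real entrypoint. It illustrates the
# idea described above: run a sequence of shell commands in a single shell
# process and report the result back through files on the shared filesystem.
set -u

WORKDIR="${1:?usage: _entrypoint.sh <workdir>}"   # hypothetical argument

status=0
while IFS= read -r cmd || [ -n "$cmd" ]; do
  # eval keeps every command in the same shell, so exported variables
  # persist from one command to the next.
  eval "$cmd" || { status=$?; break; }
done < "$WORKDIR/commands"

# Communicate the outcome via the filesystem rather than a network channel.
printf '%s\n' "$status" > "$WORKDIR/exit-code"
exit "$status"
```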