Agents (Runs & VCS)

Whether you want to use an agent to execute runs in your own network or use a private VCS provider without opening it to the internet, self-hosted agents give you the extra layer of security and control to meet your requirements.

Agents for Runs

By default, when executing a Terraform run in Scalr, it will execute on a shared pool of resources maintained by Scalr. This method suffices for the majority of use cases, but security, compliance, or network requirements sometimes require runs to execute on a self-hosted pool of agents. The Scalr self-hosted agent pools are deployed on your infrastructure, fully encrypted, and only need network access back to Scalr to report the run results. Scalr will never need network access back to the agent.

Agents for VCS Providers

Scalr relies on access to VCS providers to pull Terraform code into workspaces, populate the module registry, and pull in OPA policies. If your VCS provider is behind a firewall, it is unlikely that you'll want to expose it to the internet. VCS agents create a relay between the VCS provider and Scalr so that the VCS provider only needs to connect to the agent, and the agent maintains an HTTPS connection to Scalr. VCS agents can only be used with GitHub Enterprise and GitLab Enterprise at this time.

Configuring Agent Pools


  • Agents can be deployed on:
    • CentOS 7.x/8.x (CentOS 7.x support will end on November 24, 2023)
    • RedHat 7.x/8.x/9.x
    • Rocky Linux 9.x
    • Ubuntu 18.04/20.04/22.04
    • Docker (version 18+) containers
    • Kubernetes (beta). The helm chart for Kubernetes can be found here.
  • If RHEL/CentOS 7.x is used, the scalr-agent package has a dependency on the container-selinux package from the "extras" repository. If you do not have that repository enabled, please do so by following the Enable Extras Repo page.
  • The agents must have HTTPS connections to * and *
  • For run agents: Agent sizing depends on your workloads. For the majority of workloads, 512MB of RAM and 1 CPU allocated for each run/container will be sufficient; larger workloads may need a bigger memory allocation. If you need more than one concurrent run, calculate the sizing as RAM x concurrency, where RAM is the amount of RAM allocated per container and concurrency is how many parallel runs are required. For example, two concurrent runs at 512MB each require 1024MB of RAM. Free RAM is the main constraint for agents, so always ensure there is enough left for the OS to keep running as well. Each agent currently has a max of five concurrent runs.
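The sizing rule above can be sketched as a quick calculation. The OS headroom value here is an illustrative assumption, not a Scalr requirement:

```shell
# Required agent RAM = per-run RAM x concurrency, plus headroom for the OS.
ram_per_run_mb=512   # RAM allocated to each run/container
concurrency=2        # parallel runs needed (max 5 per agent)
os_headroom_mb=256   # illustrative buffer so the OS keeps some free RAM

required_mb=$(( ram_per_run_mb * concurrency + os_headroom_mb ))
echo "Provision at least ${required_mb}MB RAM for this agent"
```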


Beta Kubernetes Installation

Kubernetes support for the agent is in beta, so you will not see instructions in the UI yet. To register the Kubernetes agent, you only need the SCALR_TOKEN and SCALR_URL; then follow the instructions in the readme. Once completed, the agent will automatically appear in the UI.
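As a rough sketch, registration with Helm typically passes the two values above to the chart. The chart reference and value keys below are assumptions — use the exact names given in the chart's readme:

```shell
# Hypothetical chart reference and value keys; consult the chart readme for the real ones.
export SCALR_TOKEN="<agent-pool-token>"        # token shown when creating the agent pool
export SCALR_URL="https://<account>.scalr.io"  # your Scalr account URL
helm install scalr-agent <chart-reference> \
  --set agent.token="$SCALR_TOKEN" \
  --set agent.url="$SCALR_URL"
```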

Agent pools are created at the account scope and assigned to workspaces. To create a pool, go to the account scope, click create agent pool, select VCS or Runs, and then follow the in-app instructions:


Run Agent Configuration:

Once the agent is registered, you can link the pool to workspaces:


VCS Agent Configuration:

Once the agent is registered, you can set up the VCS provider and select the agent as the last step of the VCS setup:

Managing Agent Pools

Once a pool is created, you can check the status of agents in the pool:


The logs for the agents can be seen by running journalctl -xe -u scalr-agent on the instance that the agent is running on.

Each pool can be managed individually and can only be deleted if it is not linked to a workspace.

Run the Agent as Root

Agents can be updated to run as root, giving you the privileges to configure the container through local-exec (i.e. add apt repositories, install software via apt-get, add root CA certificates, etc.). To configure the agent to run as root, run the following commands:

sudo scalr-agent configure --user=root
sudo systemctl daemon-reload
sudo systemctl restart scalr-agent # if agent is already running
sudo systemctl start scalr-agent # if agent is not running

Customizing the Agent

If you need to customize the agent to add software, certs, or anything else that a Terraform run might need, you can do so with the following:

Create a Dockerfile that points to the Scalr Docker image, update the version as needed, and then add your customization:

FROM scalr/terraform:1.0.0
ADD ...
RUN ...

Once the Dockerfile is done, run the following command to build the image:

/opt/scalr-agent/embedded/bin/docker build . -t scalr/terraform:1.0.0

IMPORTANT: The image must be named scalr/terraform:<version> to ensure Scalr uses it.
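After building, you can sanity-check that the required tag is present using the Docker binary bundled with the agent (same path as in the build step):

```shell
# List local images under the scalr/terraform repository name
/opt/scalr-agent/embedded/bin/docker image ls scalr/terraform
```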

Adding a Proxy

If the agent requires a proxy to get back to Scalr, create a systemd drop-in directory:

mkdir -p /etc/systemd/system/scalr-agent.service.d/

Create the /etc/systemd/system/scalr-agent.service.d/proxy.conf file, with the following contents:


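The original contents are not shown here; a typical systemd drop-in that passes proxy settings to the service looks like the following sketch, where the proxy address is a placeholder and the NO_PROXY entries depend on your network:

```ini
[Service]
# Placeholder proxy address - replace with your proxy endpoint
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
```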
Symlink the proxy.conf into the scalr-docker drop-in.

mkdir -p /etc/systemd/system/scalr-docker.service.d/
ln -s /etc/systemd/system/scalr-agent.service.d/proxy.conf \
      /etc/systemd/system/scalr-docker.service.d/proxy.conf

Once the above is added, execute the following commands:

systemctl daemon-reload
systemctl restart scalr-docker
systemctl restart scalr-agent

Persistent Containers (Beta)

The shell variable SCALR_FEATURE_FLAGS=force-atasks-v2 allows for a persistent Docker container during all phases of a Terraform run. Previously, a container was created per Terraform phase, which meant using custom hooks in every phase if you wanted to maintain customizations throughout the run. The container will still be destroyed after the run finishes.

The shell variable should be set at the Scalr account scope to make it available to all agents across all environments and workspaces.

Agent version 0.2.0 or higher is required.