Agents (Runs & VCS)
Whether you want to use an agent to execute runs in your own network or use a private VCS provider with scalr.io without opening it to the internet, self-hosted agents give you the extra layer of security and control to meet your requirements.
Agents for Runs
By default, Terraform and OpenTofu runs in scalr.io execute on a shared pool of resources maintained by Scalr. This suffices for the majority of use cases, but security, compliance, or network requirements sometimes require runs to execute on a self-hosted pool of agents. Scalr self-hosted agent pools are deployed on your infrastructure, fully encrypted, and only need network access back to scalr.io to report run results. Scalr.io never needs network access back to the agent.
Run agents are not counted against the scalr.io run concurrency. Each agent has a limit of five concurrent runs at a time to avoid overloading it. The agent was decoupled from the scalr.io concurrency limit so that customers can control their own concurrency if needed.
Example: If you have 5 concurrent runs on the scalr.io runners and two self-hosted agents running, you will have 15 concurrent runs in total.
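The arithmetic behind the example can be sketched in shell (the per-agent limit of five runs comes from the paragraph above):

```shell
# Total concurrency = scalr.io runner concurrency + (self-hosted agents x 5 runs per agent)
SCALR_IO_CONCURRENCY=5
SELF_HOSTED_AGENTS=2
RUNS_PER_AGENT=5
echo $((SCALR_IO_CONCURRENCY + SELF_HOSTED_AGENTS * RUNS_PER_AGENT))  # prints 15
```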
Agents for VCS Providers
Scalr relies on access to VCS providers to pull Terraform or OpenTofu code into workspaces, populate the module registry, or pull in OPA policies. If you have a VCS provider that is behind a firewall, it is unlikely that you'll want to open it to scalr.io and expose it to the internet. VCS agents create a relay between the VCS provider and scalr.io so that the VCS provider only needs to connect to the agent, and the agent has an HTTPS connection to scalr.io. VCS agents can only be used with GitHub Enterprise and GitLab Enterprise at this time.
Configuring Agent Pools
Prerequisites:
- Agents can be deployed on:
  - Rocky Linux 9.x
  - Ubuntu 20.04/22.04
  - Docker (version 18+) containers
  - Kubernetes - The Helm chart for Kubernetes can be found here.
- The agents must have HTTPS connections to *scalr.io* and *docker.io*.
- For run agents: Agent sizing depends on your workloads. For the majority of workloads, 512MB of RAM and 1 CPU allocated for each run/container will be sufficient; larger workloads may need more memory. If you need more than one concurrent run, size the agent as RAM x Concurrency, where RAM is the amount of RAM allocated per container and concurrency is the number of parallel runs required. For example, if two concurrent runs are needed, the agent should have 1024MB of RAM. Free RAM is the main constraint for agents, so always leave enough for the OS to keep running as well. Each agent currently has a maximum of five concurrent runs.
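The RAM x Concurrency sizing above can be checked with a quick shell calculation (the values are the example's, not a recommendation):

```shell
# Required agent RAM = RAM allocated per container x number of parallel runs
RAM_PER_RUN_MB=512
CONCURRENCY=2
echo "$((RAM_PER_RUN_MB * CONCURRENCY))MB"  # prints 1024MB
```

Remember to add OS headroom on top of the computed figure.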
Agent pools are created at the account scope and assigned to workspaces. To create a pool, go to the account scope, click create agent pool, select VCS or Runs, and then follow the in-app instructions:
Run Agent Configuration:
Once the agent is registered, you can link the pool to workspaces:
VCS Agent Configuration:
Once the agent is registered, you can now set up the VCS provider and select the agent as the last step of the VCS setup:
Managing Agent Pools
Once a pool is created, you can check the status of agents in the pool:
The logs for the agents can be seen by running journalctl -xe -u scalr-agent on the instance that the agent is running on.
Each pool can be managed individually, and a pool can only be deleted if it is not linked to any workspace.
Run the Agent as Root
Agents can be updated to run as root, giving you the privilege to configure the container (e.g., add apt repositories, install software via apt-get, add root CA certificates) through local-exec. To configure the agent as root, run the following commands:
sudo scalr-agent configure --user=root
sudo systemctl daemon-reload
sudo systemctl restart scalr-agent # if agent is already running
sudo systemctl start scalr-agent # if agent is not running
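As an illustration of why root access matters, a workspace's Terraform code could then run privileged commands in the run container via a local-exec provisioner. This is a hypothetical sketch; the resource name and package are examples only:

```hcl
resource "null_resource" "install_tools" {
  provisioner "local-exec" {
    # Runs inside the agent's run container; installing packages
    # via apt-get requires the agent to be configured to run as root
    command = "apt-get update && apt-get install -y jq"
  }
}
```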
Customizing the Agent
If you need to customize the agent to add software, certs, or anything else that a Terraform run might need, you can do so with the following:
Create a Dockerfile based on the Scalr Docker image, update the version as needed, and then add your customizations:
FROM scalr/terraform:1.0.0
ADD ...
RUN ...
Once the Dockerfile is done, run the following command to build the image:
/opt/scalr-agent/embedded/bin/docker build . -t scalr/terraform:1.0.0
IMPORTANT: The image must be named scalr/terraform:<version> to ensure Scalr uses it.
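For instance, a customized image that trusts an internal CA and adds a CLI tool might look like the following. This is a sketch only: the certificate filename and the jq package are examples, and the RUN steps assume a Debian-based base image (adjust for your base image's package manager):

```dockerfile
# Base on the Scalr image matching the Terraform version your workspaces use
FROM scalr/terraform:1.0.0

# Example customizations: trust an internal CA and install a CLI tool
ADD internal-ca.crt /usr/local/share/ca-certificates/internal-ca.crt
RUN update-ca-certificates && \
    apt-get update && \
    apt-get install -y --no-install-recommends jq && \
    rm -rf /var/lib/apt/lists/*
```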
Adding a CA bundle
If your agent needs to connect to an internal service that requires your own CA bundle, you can add the certificate in the following ways:
- Kubernetes-based agents: Use the agent.container_task_ca_cert setting in the Helm chart to set the path to the certificate. See more here.
- Docker-based agents: Set the path to the certificate in a SCALR_CONTAINER_TASK_CA_CERT environment variable.
- VM-based agents: Set the path to the certificate via the container_task_ca_cert config option in /etc/scalr-agent/agent.conf.
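For example, on a VM-based agent the option goes in the agent config file. The certificate path below is hypothetical, and the key: value syntax is an assumption about the config file's format; check your existing /etc/scalr-agent/agent.conf for the exact style:

```yaml
# /etc/scalr-agent/agent.conf
container_task_ca_cert: "/etc/pki/tls/certs/internal-ca-bundle.pem"
```

Restart the agent after changing the config so the new bundle is picked up.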
Adding a Proxy
If the agent requires a proxy to get back to scalr.io, first create a systemd drop-in directory:
mkdir -p /etc/systemd/system/scalr-agent.service.d/
Create the /etc/systemd/system/scalr-agent.service.d/proxy.conf file with the following contents:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Symlink the proxy.conf into the scalr-docker drop-in:
mkdir -p /etc/systemd/system/scalr-docker.service.d/
ln -s /etc/systemd/system/scalr-agent.service.d/proxy.conf \
/etc/systemd/system/scalr-docker.service.d/proxy.conf
Once the above is added, execute the following commands:
systemctl daemon-reload
systemctl restart scalr-docker
systemctl restart scalr-agent