Agents (Runs & VCS)
Whether you want to use an agent to execute runs in your own network or use a private VCS provider with scalr.io without opening it to the internet, self-hosted agents give you the extra layer of security and control to meet your requirements.
Agents for Runs
By default, a Terraform or OpenTofu run in scalr.io executes on a shared pool of resources maintained by Scalr. This suffices for the majority of use cases, but security, compliance, or network requirements sometimes dictate that runs execute on a self-hosted pool of agents. Scalr self-hosted agent pools are deployed on your infrastructure, are fully encrypted, and only need network access back to scalr.io to report the run results. Scalr.io never needs network access back to the agent.
Run agents are not counted against the scalr.io run concurrency. Each agent has a limit of 5 concurrent runs at a time to avoid overloading it. Agent concurrency was decoupled from the scalr.io concurrency limit so that customers can control their own concurrency if needed.
Example: If you have 5 concurrent runs on the scalr.io runners and two self-hosted agents running (5 runs each), you have a total of 15 concurrent runs.
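The arithmetic behind the example can be sketched as a trivial calculation (variable names are illustrative, not agent settings):

```shell
# Total concurrency = scalr.io runners + (self-hosted agents x 5 runs per agent)
SCALR_IO_RUNS=5
AGENTS=2
RUNS_PER_AGENT=5
TOTAL=$((SCALR_IO_RUNS + AGENTS * RUNS_PER_AGENT))
echo "$TOTAL" # 15
```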
Agents for VCS Providers
Scalr relies on access to VCS providers to pull Terraform or OpenTofu code into workspaces, populate the module registry, or pull in OPA policies. If you have a VCS provider that is behind a firewall, it is unlikely that you'll want to open it to scalr.io and expose it to the internet. VCS agents create a relay between the VCS provider and scalr.io so that the VCS provider only needs to connect to the agent, and the agent has an HTTPS connection to scalr.io. VCS agents can only be used with GitHub Enterprise and GitLab Enterprise at this time.
Configuring Agent Pools
Prerequisites:
- Run and VCS agents SHOULD NOT be deployed on the same infrastructure.
- Agents can be deployed on:
  - Rocky Linux 9.x
  - Ubuntu 20.04/22.04
  - Docker (version 18+) containers
  - Kubernetes - The helm chart for Kubernetes can be found here.
- The agents must have HTTPS connections to scalr.io and docker.io.
- For run agents: agent sizing depends on your workloads. For the majority of workloads, 512MB of RAM and 1 CPU allocated for each run/container is sufficient. For larger workloads, you may need to increase the memory allocation. If you need more than one concurrent run, calculate the sizing as RAM x Concurrency, where RAM is the amount of RAM allocated per container and concurrency is the number of parallel runs required. For example, two concurrent runs at 512MB each require 1024MB of RAM. Free RAM is the main constraint for agents; always ensure enough is left over for the OS to keep running as well. Each agent currently has a maximum of five concurrent runs.
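The sizing formula can be sanity-checked with a quick shell calculation (variable names here are illustrative only):

```shell
# Sizing = RAM allocated per run container x desired concurrency
RAM_PER_RUN_MB=512   # RAM allocated for each run/container
CONCURRENCY=2        # number of parallel runs required
TOTAL_MB=$((RAM_PER_RUN_MB * CONCURRENCY))
echo "${TOTAL_MB}MB" # 1024MB
```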
Agent pools are created at the account scope and assigned to workspaces. To create a pool, go to the account scope, expand the inventory menu, and click agent pools. Select VCS or Runs, and then follow the in-app instructions:
Run Agent Configuration:
Once the agent is registered, you can link the pool to workspaces:
VCS Agent Configuration:
Once the agent is registered, you can now set up the VCS provider and select the agent as the last step of the VCS setup:
Managing Agent Pools
Once a pool is created, you can check the status of agents in the pool:
Logs
The logs for the agents can be viewed by running the following commands, depending on the platform the agent is running on:
- VM:
journalctl -xe -u scalr-agent > scalr_agent.logs
- Docker:
docker logs <container-name>
- Kubernetes:
kubectl logs <POD_NAME>
Run the Agent as Root
Agents can be configured to run as root, giving you the privilege to customize the container through local-exec (e.g., add apt repositories, install software via apt-get, add root CA certificates). To configure the agent to run as root, run the following commands:
sudo scalr-agent configure --user=root
sudo systemctl daemon-reload && sudo systemctl restart scalr-agent # if agent is already running
sudo systemctl daemon-reload && sudo systemctl start scalr-agent # if agent is not running
Customizing the Agent
The instructions below are for VM or Docker based deployments. For Kubernetes-based agents, see the helm chart options here.
If you need to customize the agent to add software, certs, or anything else that a Terraform run might need, you can do so with the following:
Create a Dockerfile that points to the Scalr Docker image, update the version as needed, and then add the customization:
FROM scalr/terraform:1.0.0
ADD ...
RUN ...
Once the Dockerfile is done, run the following command to build the image:
/opt/scalr-agent/embedded/bin/docker build . -t scalr/terraform:1.0.0
IMPORTANT: The image must be named scalr/terraform:<version> to ensure Scalr uses it.
Adding a CA bundle
The instructions below are for VM or Docker based deployments. For Kubernetes-based agents, use the agent.container_task_ca_cert setting in the helm chart to set the path to the certificate. See more here.
To configure SSL certificates globally, use the SCALR_CA_CERT option. To configure SSL certificates only for the isolated containers that run tasks (e.g. tofu/terraform/infracost operations), set the SCALR_CONTAINER_TASK_CA_CERT option.
The CA file can be located on the agent's VM, allowing a certificate to be selected by its file path. If the agent is running within Docker, ensure the certificate is mounted into the agent container.
Alternatively, a base64-encoded string containing the certificate bundle can be used. Example of encoding a bundle:
$~ cat /path/to/bundle.ca | base64
Example of running agent with custom CA certificates with a Docker deployment method:
$~ docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /var/lib/scalr-agent:/var/lib/scalr-agent \
-e SCALR_URL=https://<account>.scalr.io \
-e SCALR_TOKEN=<token> \
-e SCALR_DATA_HOME=/var/lib/scalr-agent \
-e SCALR_CA_CERT=/var/lib/scalr-agent/ca.cert \
--rm -it --pull=always --name=scalr-agent scalr/agent:latest run
Note that the certificate is located in the /var/lib/scalr-agent/ directory, which is mounted into the container.
You can optionally bundle your certificate into an agent image. Place the custom CA file at extra_ca_root.crt and build the customized image:
FROM scalr/agent:latest
ADD extra_ca_root.crt /usr/local/share/ca-certificates/extra-ca.crt
RUN apt update \
    && apt install ca-certificates -y \
    && chmod 644 /usr/local/share/ca-certificates/extra-ca.crt \
    && update-ca-certificates
ENV SCALR_CA_CERT="/etc/ssl/certs/ca-certificates.crt"
This step also bundles your certificate with the set of public certificates provided by the ca-certificates system package. You can optionally skip this step and instead point SCALR_CA_CERT to your certificate if it already includes the public CA certificates or if they are not needed (e.g., in a setup completely hidden behind a proxy).
Note that by default, the Scalr agent uses the certificate bundle provided by the certifi package instead of the system certificate bundle provided by the ca-certificates package.
Adding a Proxy
The instructions below are for VM or Docker based deployments. For Kubernetes-based agents, see the proxy settings in the helm chart here.
VM-Based
For a VM, if the agent requires a proxy to reach back to scalr.io, create a systemd drop-in directory:
mkdir -p /etc/systemd/system/scalr-agent.service.d/
Create the /etc/systemd/system/scalr-agent.service.d/proxy.conf file with the following contents:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Symlink the proxy.conf into the scalr-docker drop-in directory:
mkdir -p /etc/systemd/system/scalr-docker.service.d/
ln -s /etc/systemd/system/scalr-agent.service.d/proxy.conf \
/etc/systemd/system/scalr-docker.service.d/proxy.conf
Once the above is added, execute the following commands:
systemctl daemon-reload
systemctl restart scalr-docker
systemctl restart scalr-agent
Docker-Based
For Docker, add the optional proxy environment variables (HTTP_PROXY, HTTPS_PROXY, NO_PROXY):
$~ docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /var/lib/scalr-agent:/var/lib/scalr-agent \
-e SCALR_URL=https://<account>.scalr.io \
-e SCALR_TOKEN=<token> \
-e SCALR_DATA_HOME=/var/lib/scalr-agent \
-e HTTP_PROXY="<proxy-address>" \
-e HTTPS_PROXY="<proxy-address>" \
-e NO_PROXY="<addr1>,<addr2>" \
--rm -it --pull=always --name=scalr-agent scalr/agent:latest run
Other Configuration Options
Kubernetes Deployments
The instructions below are for VM or Docker based deployments. For Kubernetes-based agents, see the helm chart here.
Docker & VM Deployments
The Docker and VM based agents both use the same underlying application via a Docker backend, so all options seen below can be applied to both, but they are set in different ways.
For example, to customize using the Docker installation method:
$~ docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /var/lib/scalr-agent:/var/lib/scalr-agent \
-e SCALR_URL=https://<account>.scalr.io \
-e SCALR_TOKEN=<token> \
-e SCALR_DATA_HOME=/var/lib/scalr-agent \
-e HTTP_PROXY="<proxy-address>" \
-e HTTPS_PROXY="<proxy-address>" \
-e NO_PROXY="<addr1>,<addr2>" \
-e SCALR_CONTAINER_TASK_MEM_LIMIT=16384 \
--rm -it --pull=always --name=scalr-agent scalr/agent:latest run
To customize using the VM (RPM/DEB) method, set standard OS environment variables:
export variable=value
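To make such a setting persistent across restarts on a VM, one option is a systemd drop-in, mirroring the proxy drop-in shown earlier. This is a sketch; the file name limits.conf and the 8192 value are illustrative:

```ini
# /etc/systemd/system/scalr-agent.service.d/limits.conf (hypothetical file name)
[Service]
Environment="SCALR_CONTAINER_TASK_MEM_LIMIT=8192"
```

After adding the file, run systemctl daemon-reload && systemctl restart scalr-agent for it to take effect.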
Below are all of the variable options that can be used for customization:
Option | Type | Default | Description |
---|---|---|---|
SCALR_CONTAINER_TASK_CPU_REQUEST | float | 1.0 | CPU resource request defined in cores. If your container needs two full cores to run, you would put the value 2 . If your container only needs ¼ of a core, you would put a value of 0.25 cores. |
SCALR_CONTAINER_TASK_CPU_LIMIT | float | 8.0 | CPU resource limit defined in cores. If your container needs two full cores to run, you would put the value 2 . If your container only needs ¼ of a core, you would put a value of 0.25 cores. |
SCALR_CONTAINER_TASK_MEM_REQUEST | int | 1024 | Memory resource request defined in megabytes. |
SCALR_CONTAINER_TASK_MEM_LIMIT | int | 16384 | Memory resource limit defined in megabytes |
SCALR_CONTAINER_TASK_CA_CERT | str | null | The CA certificate bundle to mount into the container task at /etc/ssl/certs/ca-certificates.crt . The CA file can be located inside the agent VM, allowing selection of a certificate by its path. If running the agent within Docker, ensure the certificate is mounted into the agent container. Alternatively, a base64 string containing the certificate bundle can be used. Example of encoding it: cat /path/to/bundle.ca | base64 . The bundle should include both your private CAs and the standard set of public CAs. |
SCALR_CONTAINER_TASK_IMAGE_REGISTRY | str | null | Enforce the use of a custom image registry to pull all container task images. All images must be preemptively pushed to this registry for the agent to work with this option. The registry path may include a repository to be replaced. If the path ends with a trailing slash, it will be appended to the original repository.Example: 'mirror.io', 'mirror.io/myproject' or 'mirror.io/myproject/'. |
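As an illustration of combining these options, a Docker-based agent pinned to a private image mirror might be started as follows. This is a sketch: mirror.example.com/myproject is a placeholder registry path, and <account>/<token> must be replaced with your values:

```shell
# Hypothetical example: force all container task images to pull from a private mirror.
docker run \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/scalr-agent:/var/lib/scalr-agent \
  -e SCALR_URL=https://<account>.scalr.io \
  -e SCALR_TOKEN=<token> \
  -e SCALR_DATA_HOME=/var/lib/scalr-agent \
  -e SCALR_CONTAINER_TASK_IMAGE_REGISTRY=mirror.example.com/myproject \
  --rm -it --pull=always --name=scalr-agent scalr/agent:latest run
```

Remember that all task images must already be pushed to the mirror for the agent to work with this option.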