Whether you want to use an agent to execute runs in your own network or use a private VCS provider with scalr.io without opening it to the internet, self-hosted agents give you the extra layer of security and control to meet your requirements.
By default, Terraform runs in scalr.io execute on a shared pool of resources maintained by Scalr. This suffices for the majority of use cases, but security, compliance, or network requirements sometimes demand that runs execute on a self-hosted pool of agents. Scalr self-hosted agent pools are deployed on your infrastructure, are fully encrypted, and only need network access back to scalr.io to report run results. Scalr.io never needs network access back to the agent.
Scalr relies on access to VCS providers to pull Terraform code into workspaces, populate the module registry, and pull in OPA policies. If your VCS provider is behind a firewall, it is unlikely that you'll want to open it to scalr.io and expose it to the internet. VCS agents create a relay between the VCS provider and scalr.io, so the VCS provider only needs to connect to the agent, and the agent has an HTTPS connection to scalr.io. VCS agents can only be used with GitHub Enterprise and GitLab Enterprise at this time.
- Agents can be deployed on:
- CentOS 7.x/8.x (CentOS 7.x support will end on November 24, 2023)
- RedHat 7.x/8.x/9.x
- Rocky Linux 9.x
- Ubuntu 18.04/20.04/22.04
- Docker (version 18+) containers
- Kubernetes (beta). The helm chart for Kubernetes can be found here.
- If RHEL/CentOS 7.x is used, the scalr-agent package has a dependency on the `container-selinux` package from the "extras" repository. If you do not have that repository enabled, enable it by following the Enable Extras Repo page.
- The agents must have an HTTPS connection to scalr.io.
- For run agents: agent sizing depends on your workloads. For the majority of workloads, 512MB of RAM and 1 CPU allocated per run/container is sufficient; increase the memory for larger workloads. If you need more than one concurrent run, size the agent as RAM x concurrency, where RAM is the amount allocated per container and concurrency is the number of parallel runs. For example, two concurrent runs require 1024MB of RAM. Free RAM is the main constraint with agents; always ensure enough is left for the OS to continue to run as well. Each agent currently has a maximum of five concurrent runs.
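The sizing rule above can be sketched as a quick calculation. Note that the 512 MB OS headroom value below is an illustrative assumption of ours, not a figure from Scalr:

```shell
# Sizing rule: total RAM = per-run RAM x concurrency, plus headroom for the OS.
PER_RUN_MB=512      # RAM allocated per run/container
CONCURRENCY=2       # desired parallel runs (max 5 per agent)
OS_HEADROOM_MB=512  # assumption: leave room for the OS itself

RUN_MB=$(( PER_RUN_MB * CONCURRENCY ))
TOTAL_MB=$(( RUN_MB + OS_HEADROOM_MB ))
echo "Runs need ${RUN_MB} MB; provision at least ${TOTAL_MB} MB"
```

With two concurrent runs this yields 1024 MB for the runs themselves, matching the example in the text.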
Beta Kubernetes Installation
Kubernetes support for the agent is in beta, so you will not see instructions in the UI yet. To register the Kubernetes agent, you only need the `SCALR_URL`; then follow the instructions in the readme. Once completed, the agent will automatically appear in the UI.
Agent pools are created at the account scope and assigned to workspaces. To create a pool, go to the account scope, click create agent pool, select VCS or Runs, and then follow the in-app instructions:
Once the agent is registered, you can link the pool to workspaces:
Once the agent is registered, you can now set up the VCS provider and select the agent as the last step of the VCS setup:
Once a pool is created, you can check the status of agents in the pool:
The logs for the agents can be seen by running `journalctl -xe -u scalr-agent` on the instance that the agent is running on.
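To follow the agent logs live rather than paging through recent entries, the same unit filter can be combined with journalctl's follow flag:

```
journalctl -f -u scalr-agent
```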
Each pool can be managed individually and only deleted if the pool is not linked to a workspace.
Agents can be updated to run as `root`, giving you the privilege to configure the container (e.g., add apt repositories, install software via apt-get, add root CA certificates) through `local-exec`. To configure the agent as `root`, run the following commands:
```
sudo scalr-agent configure --user=root
sudo systemctl daemon-reload
sudo systemctl restart scalr-agent  # if agent is already running
sudo systemctl start scalr-agent    # if agent is not running
```
If you need to customize the agent to add software, certs, or anything else that a Terraform run might need, you can do so with the following:
Create a Docker file that points to the Scalr Docker image, update the version as needed, and then add the customization:
```
FROM scalr/terraform:1.0.0
ADD ...
RUN ...
```
Once the Docker file is done, run the following command to build the image:
```
/opt/scalr-agent/embedded/bin/docker build . -t scalr/terraform:1.0.0
```
IMPORTANT: The image must be named `scalr/terraform:<version>` to ensure Scalr uses it.
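As a concrete sketch, a customization that bakes a corporate root CA into the run image might look like the following. The certificate filename here is hypothetical:

```
# Hypothetical example: trust an internal root CA inside the run container.
cat > Dockerfile <<'EOF'
FROM scalr/terraform:1.0.0
ADD my-root-ca.crt /usr/local/share/ca-certificates/my-root-ca.crt
RUN update-ca-certificates
EOF

# Keep the scalr/terraform:<version> name so Scalr picks up the custom image.
/opt/scalr-agent/embedded/bin/docker build . -t scalr/terraform:1.0.0
```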
If the agent requires a proxy to reach scalr.io, create a systemd drop-in directory:

```
mkdir -p /etc/systemd/system/scalr-agent.service.d/
```

Then create a `/etc/systemd/system/scalr-agent.service.d/proxy.conf` file with the following contents:
```
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
```
Then symlink `proxy.conf` into the scalr-docker drop-in directory:
```
mkdir -p /etc/systemd/system/scalr-docker.service.d/
ln -s /etc/systemd/system/scalr-agent.service.d/proxy.conf \
  /etc/systemd/system/scalr-docker.service.d/proxy.conf
```
Once the above is added, execute the following commands:
```
systemctl daemon-reload
systemctl restart scalr-docker
systemctl restart scalr-agent
```
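After restarting, you can confirm that systemd picked up the drop-in by inspecting the service's effective environment; the proxy variables should appear in the output:

```
systemctl show scalr-agent --property=Environment
```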
The shell variable `SCALR_FEATURE_FLAGS=force-atasks-v2` enables a persistent Docker container across all phases of a Terraform run. Previously, a container was created per Terraform phase, which required custom hooks at every phase if you wanted to maintain any customizations throughout the run. The container is still destroyed after the run finishes.
The shell variable should be set at the Scalr account scope to make it available to all agents across all environments and workspaces.
Agent version 0.2.0 or higher is required.