By default, a Terraform run in scalr.io executes on a shared pool of resources maintained by Scalr. This suffices for the majority of use cases, but security, compliance, or network requirements sometimes require runs to execute on a self-hosted pool of agents. Scalr self-hosted agent pools are deployed on your own infrastructure, are fully encrypted, and only need network access back to scalr.io to report run results. Scalr.io never needs network access back to the agent.
- Agents can be deployed on RHEL/CentOS 7.x/8.x, Ubuntu 18.04, or Docker (version 18+) containers. If RHEL/CentOS 7.x is used, the scalr-agent package has a dependency on the `container-selinux` package from the "extras" repository. If you do not have that repository enabled, please do so by following the Enable Extras Repo page.
- The agents must have HTTPS connections to scalr.io and docker.io.
- Agent sizing depends on your workloads. For the majority of workloads, 512MB of RAM and 1 CPU allocated per run/container will be sufficient; larger workloads may need more memory. If you need more than one concurrent run, size the agent as RAM x concurrency, where RAM is the amount allocated per container and concurrency is the number of parallel runs required. For example, if two concurrent runs are needed, the sizing should be 1024MB of RAM. Free RAM is the main constraint on agents, so always ensure there is enough left for the OS to continue to run as well.
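The sizing rule above can be sketched as a quick calculation. The 512MB-per-run figure comes from the guidance above; the OS headroom value is an illustrative assumption you should size for your own host:

```shell
# Estimate agent memory needs: RAM per run container x concurrent runs,
# plus headroom so the OS itself keeps running.
RAM_PER_RUN_MB=512   # per-run allocation suggested above
CONCURRENCY=2        # desired number of parallel runs
OS_HEADROOM_MB=512   # hypothetical headroom; adjust for your OS footprint

echo "runs:  $((RAM_PER_RUN_MB * CONCURRENCY)) MB"
echo "total: $((RAM_PER_RUN_MB * CONCURRENCY + OS_HEADROOM_MB)) MB"
```

With two concurrent runs this reproduces the 1024MB figure from the example above, plus whatever headroom you reserve for the host.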
Agent pools are created at the account scope and assigned to workspaces. To create a pool, go to the account scope, click create agent pool, and register an agent. Follow the in-app installation instructions:
Once the agent is registered, you can link the pool to workspaces:
Once a pool is created, you can check the status of agents in the pool:
The logs for the agents can be seen by running `journalctl -xe -u scalr-agent` on the instance that the agent is running on.
Each pool can be managed individually, and a pool can only be deleted if it is not linked to a workspace.
Agents can be updated to run as `root`, giving you the privilege to configure the container (i.e. add apt repositories, install software via apt-get, add root CA certificates, etc.) through `local-exec`. To configure the agent as `root`, run the following commands:
```shell
sudo scalr-agent configure --user=root
sudo systemctl daemon-reload
sudo systemctl restart scalr-agent  # if agent is already running
sudo systemctl start scalr-agent    # if agent is not running
```
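Once the agent runs as `root`, workspace Terraform code can configure the run container via `local-exec` as described above. A minimal sketch, assuming the standard `null_resource` and `local-exec` provisioner; the resource name and installed package are illustrative placeholders:

```hcl
resource "null_resource" "install_tooling" {
  provisioner "local-exec" {
    # Runs inside the agent's run container during the apply;
    # apt-get requires the root configuration shown above.
    command = "apt-get update && apt-get install -y jq"
  }
}
```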
In future releases, you will be able to fully customize the Docker images that are running on the agent.
If the agent requires a proxy to get back to scalr.io, please create a systemd drop-in directory:

```shell
mkdir -p /etc/systemd/system/scalr-agent.service.d/
```

Then create the `/etc/systemd/system/scalr-agent.service.d/proxy.conf` file, with the following contents:
```
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
```
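If some destinations should bypass the proxy, the conventional `NO_PROXY` environment variable can be added alongside the settings above. This is a common convention honored by Docker and most HTTP clients rather than something Scalr-specific, and the hostnames here are illustrative:

```ini
[Service]
Environment="NO_PROXY=localhost,127.0.0.1,.internal.example.com"
```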
Docker also needs the proxy settings, so link the same `proxy.conf` into the scalr-docker drop-in:

```shell
mkdir -p /etc/systemd/system/scalr-docker.service.d/
ln -s /etc/systemd/system/scalr-agent.service.d/proxy.conf \
  /etc/systemd/system/scalr-docker.service.d/proxy.conf
```
Once the above is added, execute the following commands:
```shell
systemctl daemon-reload
systemctl restart scalr-docker
systemctl restart scalr-agent
```
In a future release, Scalr will be able to restrict and enforce which environments use which pools from the account scope.