Workspace Settings
Provider Configurations
Provider configurations are how you authenticate against Terraform providers through Scalr. The configurations are stored in the Scalr account, shared with environments and/or workspaces, and automatically passed to Terraform runs.
Most of the time, a configuration is set by an admin for an environment and enabled as the default, which means that no further configuration is needed within a workspace.
But in the event that the admins want to leave the configuration selection up to the workspace owner, the configuration can be selected at the workspace level if the user has the proper permissions (cloud-credentials:read at the account scope, workspaces:update at any scope). For example, you might share two AWS credentials, prod and non-prod, with an environment and want the correct credential to be chosen at the workspace level.
Custom Hooks
Custom hooks are used to customize the core Terraform workflow. It is a common requirement to run a command, script, or API call before or after the Terraform plan and/or apply events. For example, many customers run lint tests before the plan to ensure the Terraform code is formatted correctly, or install software before the apply if it is needed for the Terraform code to execute correctly.
If a command is being used in the hook, nothing else is needed except typing the command into the text box; shell variables can be referenced if required.
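For instance, the formatting check mentioned above can be a single standard Terraform command typed into the pre-plan hook box (the exact command is up to you; this one is purely illustrative):
terraform fmt -check -recursive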
If a script is being used, ensure that the script is uploaded as part of the configuration files with the Terraform code. Optionally, the script can also be downloaded at run time (e.g. wget -qO- https://script.sh | sh). If a script is being used, please ensure it has execute permissions (i.e. chmod +x <filename>).
If you are using pip to install a binary, the binary will be installed under /tmp/.local/bin.
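As an illustration, a pre-plan hook that installs and runs a Python-based tool might look like the following (checkov is just an example package, not a requirement):
pip3 install checkov
/tmp/.local/bin/checkov -d .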
Custom hooks are added as part of the workspace creation or after a workspace is created by going into the workspace settings:

The output of the hooks can be seen directly in the console output for the plan and apply.
Hook Examples
Import Resources
A common use case we see with the pre-plan hooks is to import resources into Terraform state. Rather than downloading the state, manipulating it, and pushing it back into Scalr, you can do all of this directly in a pre-plan hook:
terraform import aws_instance.example <Instance ID>
Pulling Plan Details
In the case where the Terraform plan might need to be exported and used externally, it can be pulled by using the command below before the plan or apply, or after apply:
terraform show -json /opt/data/terraform.tfplan.bin
Agent Pools
Want to execute the Terraform run on your own infrastructure? Define which workspaces should use agent pools; agents only need outbound access to pull from scalr.io, and scalr.io never needs access into your network.
Agent pools are also helpful if you don't want to store cloud credentials in Scalr. Agents can inherit the permissions of the IAM role of the service account or profile assigned to them and pass that to the Terraform run. Agents are also helpful if you need to connect to an internal secrets management service.
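If you manage workspaces through the Scalr Terraform provider, the pool a workspace uses can be referenced on the workspace resource. A minimal sketch, assuming an agent_pool_id attribute and a placeholder pool ID (check the provider docs for the exact names):
resource "scalr_workspace" "infra" {
  name           = "infra"
  environment_id = "org-123456"
  agent_pool_id  = "apool-xxxxxxxx"
}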
Tags
Tagging in Scalr gives you an additional way to organize your workspaces and environments. It's common to want an extra layer of organization within an environment, which is where tags come into play, as you can filter based on tags in the UI, API, and provider:

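Tags can also be managed and attached as code; a minimal sketch, assuming the provider's scalr_tag resource and the workspace tag_ids attribute (verify the exact names in the provider docs):
resource "scalr_tag" "team_payments" {
  name = "team-payments"
}

resource "scalr_workspace" "infra" {
  name           = "infra"
  environment_id = "org-123456"
  tag_ids        = [scalr_tag.team_payments.id]
}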
Run Triggers
Run triggers are a way to chain workspaces together. The use case for this is that you might have one or more upstream workspaces that need to automatically kick off a downstream workspace based on a successful run in the upstream workspace. To set a trigger, go to the downstream workspace and set the upstream workspace(s). Now, whenever the upstream workspace has a successful run, the downstream workspace will automatically start a run.
If more than one (up to 50) workspace is added as the upstream, a successful run in any upstream workspace will trigger the downstream workspace run. For example, if two upstream workspaces finish at the exact same time, then the downstream workspace will have two runs queued.
The permissions required for a user to set the triggers are:
- Downstream workspace requires workspaces:update
- Upstream workspace requires workspaces:read
If the downstream workspace has auto-apply enabled, then the apply will automatically occur once the trigger happens. If not, it will wait for approval.
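If you prefer to manage triggers as code, here is a rough sketch, assuming the provider exposes a scalr_run_trigger resource with downstream_id and upstream_id attributes (the names and placeholder workspace IDs are assumptions; confirm against the provider docs):
resource "scalr_run_trigger" "network_to_app" {
  downstream_id = "ws-downstream-xxxx"
  upstream_id   = "ws-upstream-xxxx"
}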
Run Scheduler
The run scheduler is a way to automatically trigger recurring runs based on a defined schedule. This is commonly used by customers to detect drift in Terraform state. The schedule can be set to execute a run every day, on specific day(s), or using cron syntax. If you want to schedule a one-time run, please see Schedule a Run in the Runs section below. The schedule can be created for a plan/apply that updates/creates resources or for a destructive run, equivalent to terraform destroy. The approval of runs will depend on your workspace settings: if auto-approval is set, the run will automatically apply; if not, it will wait for manual confirmation before applying. All run schedules are assigned in the UTC timezone, so please convert from your time zone to ensure the runs are scheduled properly.
The most common use case for the run scheduler is to create and destroy development workspaces on a specific schedule to avoid unwanted costs, as in the sketch below.
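A rough sketch of that pattern, assuming the provider exposes a scalr_workspace_run_schedule resource with cron-style apply_schedule and destroy_schedule attributes (the resource and attribute names are assumptions; confirm against the provider docs):
resource "scalr_workspace_run_schedule" "dev_hours" {
  workspace_id     = "ws-xxxxxxxx"    # placeholder workspace ID
  apply_schedule   = "0 8 * * 1-5"    # plan/apply at 08:00 UTC on weekdays
  destroy_schedule = "0 18 * * 1-5"   # destroy at 18:00 UTC on weekdays
}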
Execution Mode
By default, the execution mode of a workspace is set to remote, which means the run will execute in Scalr. If you need to run Terraform locally, the execution mode can be set per workspace through the Scalr Terraform provider:
resource "scalr_workspace" "infra" {
  name           = "infra"
  environment_id = "org-123456"
  execution_mode = "local"
}
See the full workspace provider docs here.
Run Timeout
By default, the run timeout for a workspace is 60 minutes. You can update this timeout to anything between 10 and 720 minutes via the Scalr API or provider. See run-operation-timeout in the provider documentation. This setting will be added to the UI soon.
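For example, via the provider this maps onto the workspace resource; a minimal sketch, assuming the HCL attribute is run_operation_timeout and takes a value in minutes (check the provider docs for the exact name):
resource "scalr_workspace" "infra" {
  name                  = "infra"
  environment_id        = "org-123456"
  run_operation_timeout = 120
}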
State Rollback
The state rollback feature allows you to easily roll back to a former state file in the event of an error or corrupt file. This will not update or change your resources unless a new Terraform run is executed and a change is detected. The rollback option can be found on the page of the old state file:
Deletion Protection
By default, all workspaces have workspace deletion protection enabled when a workspace has active state. Active state is defined as a workspace that has resources under management in the state file. To delete a workspace, you can either run a destructive apply, which removes the resources from state and then disables the workspace protection, or manually disable workspace protection and delete the workspace. WARNING: If you manually disable the workspace protection and destroy the workspace, the state is not recoverable.
Force Run Option
Deployments in a Terraform workflow can be highly dynamic with a high rate of change. With the force run setting, you can ensure the latest commit/run is always the one that has priority. As one of our customers stated, "the last write wins". If the force run feature is used, all pending runs that could potentially delay the latest run will be canceled and the run with the force flag will be bumped to the top of the queue. Active runs will never be canceled, only runs that are in a pending state.
In the case of a dry run, the force run option will only cancel the runs linked to the source branch.
This is only controlled through the Scalr API and provider for now. The workspace attribute is force_latest_run. The default is false.
resource "scalr_workspace" "infra" {
  name             = "infra"
  environment_id   = "org-123456"
  force_latest_run = true
}
See the full workspace provider docs here.
Auto Queue Runs
Being able to automate the management of Scalr is a key component for the majority of our customers. Many customers create "vending machines", which create workspaces based on a certain set of criteria and automatically onboard teams and/or apps with those workspaces. The auto-queue runs feature helps with another step in that automation by giving users the ability to automatically kick off runs after the initial configuration files are uploaded into a workspace. It is not required to trigger the initial run through the UI or API. This is controlled through the following settings via the Scalr API or provider:
skip_first - (default) A run will not be triggered when the initial workspace is created, but subsequent runs will be triggered upon new configuration files being uploaded.
always - A run will be triggered as soon as the configuration files are uploaded, including the initial files.
never - Runs will not be triggered automatically based on configuration files being added to the workspace.
resource "scalr_workspace" "infra" {
  name            = "infra"
  environment_id  = "org-123456"
  auto_queue_runs = "always"
}
See the full workspace provider docs here.
.terraformignore
To optimize the speed at which Scalr clones the repository where your Terraform config resides, or to simply ignore files, Scalr will accept a .terraformignore file. Any files listed in the .terraformignore will be excluded during the clone operation. This is helpful if you have a large monorepo and you only want to focus on specific folders.
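For instance, a .terraformignore at the root of the repository could exclude folders and local artifacts that are not needed for the run (the paths below are purely illustrative, using .gitignore-style patterns):
docs/
examples/
*.tfstate
.terraform/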
Container Image Info
Regardless of the workspace type, Terraform runs occur within a Docker container that is running on the Scalr infrastructure. The container is based on standard Debian Linux and has the tools below installed already. If you need to execute runs outside of the Scalr infrastructure, you can do this through Self Hosted Agent Pools.
The following tools are already installed on the image:
Name | Description |
---|---|
AWS CLI | Used to interact with AWS. |
Azure CLI | Used to interact with Azure. See setup instructions below. |
Google CLI | Used to interact with Google Cloud. See setup instructions below. |
pip3 | Pip is the package installer for Python. |
kubectl | Kubectl can be used to manage Kubernetes in your Terraform code. |
AWS CLI:
Nothing is needed as the AWS CLI can read the $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY shell variables that Scalr passes.
Azure CLI:
az login --service-principal -u $ARM_CLIENT_ID -p $ARM_CLIENT_SECRET --tenant $ARM_TENANT_ID
az account set --subscription=$ARM_SUBSCRIPTION_ID
Google CLI:
printenv GOOGLE_CREDENTIALS > key.json
gcloud auth activate-service-account --key-file=key.json --project=$GOOGLE_PROJECT
rm -f key.json