Workspace Settings

Provider Configurations

Provider configurations are the method by which you store authentication for Terraform and OpenTofu providers in Scalr. The configurations are stored in the Scalr account, shared with environments and/or workspaces, and automatically passed into Terraform runs.

In most cases, a configuration is set by an admin for an environment and enabled as the default, meaning no further configuration is needed within a workspace.

If the admins prefer to leave the configuration selection up to the workspace owner, the configuration can instead be selected at the workspace level, provided the user has the proper permissions (cloud-credentials:read at the account scope and workspaces:update at any scope). For example, you might share two AWS credentials, prod and non-prod, with an environment and want the correct credential to be chosen at the workspace level.

IaC Platform

Each workspace must have an IaC platform set: either Terraform or OpenTofu. Terraform can be used up to version 1.5.7; for newer versions of an IaC platform, OpenTofu should be used.

Custom Hooks

Custom hooks are used to customize the core Terraform and OpenTofu workflow. It is a common requirement to run a command, script, or API call before or after the Terraform plan and/or apply events. For example, many customers run lint tests before the plan to ensure the Terraform code is formatted correctly, or install software before the apply if it is needed for the Terraform code to execute correctly.

If a command is being used in the hook, nothing is needed beyond typing the command into the text box; shell variables can be referenced if required.
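For instance, a lint check like the one mentioned above can be typed directly into the hook text box. This is a sketch; it assumes the terraform binary is available in the run container (which it is for remote runs):

```shell
# Pre-plan hook sketch: fail the run if the code is not canonically formatted.
terraform fmt -check -recursive
```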

If a script is being used, whether in a VCS- or CLI-based workspace, the script must be in the directory that the workspace points to. Optionally, the script can also be downloaded at run time (e.g. wget -O - https://script.sh | sh). Please ensure the script has execute permissions (e.g. chmod +x <filename>).
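The execute-permission requirement can be sketched like this; the script name and contents are hypothetical stand-ins for a real hook script:

```shell
# Hook sketch: a script in the workspace directory (or written/downloaded at
# run time) must be made executable before it can run.
# "check.sh" and its contents are hypothetical placeholders.
cat > check.sh <<'EOF'
#!/bin/sh
echo "lint ok"
EOF
chmod +x check.sh   # hooks fail without execute permissions
./check.sh          # prints "lint ok"
```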

If you are using pip to install a binary, the binary will be installed under /tmp/.local/bin.
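A sketch of that flow, calling the binary by its full install path; the package name is a hypothetical choice:

```shell
# Hook sketch: install a Python-based tool with pip, then invoke it from the
# path Scalr installs user binaries to. "checkov" is a hypothetical example.
pip3 install checkov
/tmp/.local/bin/checkov --directory .
```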

Terraform outputs can be used as part of the input for a command or script as Terraform and the hooks are executed in the same container. Use terraform output -json to pull the output in JSON format and then use it in your script.
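A post-apply hook might consume an output like this. The output name bucket_name is a hypothetical example; Python is preinstalled on the run image:

```shell
# Post-apply hook sketch: pull the outputs as JSON and extract one value.
# "bucket_name" is a hypothetical output defined by the workspace.
terraform output -json > outputs.json
BUCKET=$(python3 -c "import json; print(json.load(open('outputs.json'))['bucket_name']['value'])")
echo "Artifacts bucket: $BUCKET"
```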

Custom hooks are added as part of the workspace creation, or after a workspace is created by going into the workspace settings.


The output of the hooks can be seen directly in the console output for the plan and apply.

Hook Examples

Import Resources

A common use case we see with the pre-plan hooks is to import resources into Terraform state. Rather than downloading the state, manipulating it, and pushing it back into Scalr, you can do all of this directly in a pre-plan hook:

terraform import aws_instance.example <Instance ID>

Pulling Plan Details

In cases where the Terraform plan needs to be exported and used externally, it can be pulled by running the command below before or after the plan, or after the apply:

terraform show -json /opt/data/terraform.tfplan.bin
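For example, a post-plan hook could convert the binary plan to JSON and push it to an external system; the upload endpoint below is a hypothetical placeholder:

```shell
# Post-plan hook sketch: export the plan as JSON and ship it elsewhere.
terraform show -json /opt/data/terraform.tfplan.bin > plan.json
curl -s -X POST -H "Content-Type: application/json" \
  --data @plan.json https://audit.example.com/plans   # hypothetical endpoint
```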

Agent Pools

Want to execute Terraform and OpenTofu runs on your own infrastructure? Define which workspaces should use agent pools. Agents only need outbound access to pull from scalr.io; Scalr never initiates connections from scalr.io into your network.

Agent pools are also helpful if you don't want to store cloud credentials in Scalr. Agents can inherit the permissions of the IAM role, service account, or profile assigned to the underlying infrastructure and pass them to the Terraform run. Agents are also helpful if you need to connect to an internal secrets management service.

Environment Type

Environment types are a pre-defined list of categories for a workspace that can be used to filter for workspaces of the same type. The possible environment types are:

  • Production
  • Staging
  • Testing
  • Development
  • Unmapped

In future releases, the environment types will be used with features like OPA, Slack/Teams, and reporting to make their options more granular.

Tags

Tagging in Scalr gives you the ability to further organize your workspaces and environments. Commonly, you might want to add an extra layer of organization within an environment, which is where tags come into play: you can filter based on tags in the UI, API, and provider.


Tags can be added to individual workspaces as well as environments. To create tags for a workspace, users must have the following permissions:

  • account:read
  • workspaces:create/update

Currently, tags are global objects and can be used across workspaces and environments.

Run Triggers

Run triggers are a way to chain workspaces together. The use case for this is that you might have one or more upstream workspaces that need to automatically kick off a downstream workspace based on a successful run in the upstream workspace. To set a trigger, go to the downstream workspace and set the upstream workspace(s). Now, whenever the upstream workspace has a successful run, the downstream workspace will automatically start a run.

If more than one workspace (up to 50) is added as an upstream, a successful run in any upstream workspace will trigger the downstream workspace run. For example, if two upstream workspaces finish at the exact same time, the downstream workspace will have two runs queued.

The permissions required for a user to set the triggers are:

  • Downstream workspace requires workspaces:update
  • Upstream workspace requires workspaces:read

If the downstream workspace has auto-apply enabled, then the apply will automatically occur once the trigger happens. If not, it will wait for approval.

Drift / Run Scheduler

The run scheduler is a way to automatically trigger recurring runs based on a defined schedule. This is commonly used by customers to detect drift in Terraform and OpenTofu state. The schedule can be set to execute a run every day, on specific days, or via a cron expression. A schedule can be created for a plan/apply that creates or updates resources, a destructive run (equivalent to terraform destroy), or a refresh-only run, which checks the state against the actual infrastructure. Approval of the runs depends on your workspace settings: if auto-approval is set, the run will apply automatically; if not, it will wait for manual confirmation before applying. All run schedules are assigned in the UTC timezone, so please convert from your time zone to ensure the runs are scheduled properly.

The most common use case for the run scheduler is to create and destroy development workspaces on a specific schedule to avoid unwanted costs.

Permission required: workspaces:set-schedule

Execution Mode

By default, the execution mode of a workspace is set to remote, which means the run will execute in Scalr. If you need to run it locally, the execution mode flag can be set per workspace through the Scalr UI or Terraform provider:

Provider:

resource "scalr_workspace" "infra" {
  name           = "infra"
  environment_id = "env-123456"
  execution_mode = "local"
}

See the full workspace provider docs here.

If the execution mode is set to state storage only, users will still be able to execute runs through the UI or pull requests and those will execute remotely in Scalr. The state storage only mode is designed for CLI-based workspaces.

The following features are not available when using state storage only:

  • Scalr run dashboard
  • Open Policy Agent
  • Infracost
  • Provider Configurations
  • Variables stored in Scalr

Run Timeout

By default, the run timeout for a workspace is 60 minutes. You can update this timeout to anything between 10 and 720 minutes via the Scalr UI, API, or provider. See run_operation_timeout in the provider documentation.

State Rollback

The state rollback feature allows you to easily roll back to a former state file in the event of an error or a corrupt file. This will not update or change your resources unless a new Terraform or OpenTofu run is executed and a change is detected. The rollback option can be found on the page of the previous state file.

Deletion Protection

By default, all workspaces have deletion protection enabled when the workspace has active state. Active state means the workspace has resources under management in its state file. To delete a workspace, you can either run a destructive apply, which removes the resources from state and then disables the protection, or manually disable the protection and delete the workspace. WARNING: if you manually disable the protection and destroy the workspace, the state is not recoverable.

Force Run Option

Deployments in a Terraform and OpenTofu workflow can be highly dynamic with a high rate of change. With the force run setting, you can ensure the latest commit/run is always the one that has priority. As one of our customers stated, "the last write wins". If the force run feature is used, all pending runs that could potentially delay the latest run will be canceled and the run with the force flag will be bumped to the top of the queue. Active runs will never be canceled, only runs that are in a pending state.

In the case of a dry run, the force run option will only cancel the runs linked to the source branch.

The workspace attribute is force_latest_run. The default is false.

resource "scalr_workspace" "infra" {
  name           = "infra"
  environment_id = "env-123456"
  force_latest_run = true
}

See the full workspace provider docs here.

Auto Queue Runs

Being able to automate the management of Scalr is a key component for the majority of our customers. Many customers create "vending machines", which create workspaces based on a certain set of criteria and automatically onboard teams and/or apps with those workspaces. The auto-queue runs feature helps with another step in that automation by giving users the ability to automatically kick off runs after the initial configuration files are uploaded into a workspace, so it is not required to trigger the initial run through the UI or API. This setting is controlled through the UI, API, or provider:

  • skip_first - (default) A run is not triggered when the workspace is initially created, but subsequent uploads of configuration files will trigger runs.
  • always - A run is triggered as soon as configuration files are uploaded, including the initial upload.
  • never - Runs are never triggered automatically when configuration files are added to the workspace.

resource "scalr_workspace" "infra" {
  name           = "infra"
  environment_id = "env-123456"
  auto_queue_runs = "always"
}

See the full workspace provider docs here.

.terraformignore

To optimize the speed at which Scalr clones the repository where your Terraform config resides or to just ignore files, Scalr will accept a .terraformignore file. Any files listed in the .terraformignore will be excluded during the clone operation. This is helpful if you have a large monorepo and you only want to focus on specific folders. This can be used for OpenTofu workspaces as well.

Container Image Info

Regardless of the workspace type, Terraform and OpenTofu runs occur within a Docker container that is running on the Scalr infrastructure. The default memory limit for each container spun up is 2GB. The container is based on standard Debian Linux and has the tools below installed already. If you need to execute runs outside of the Scalr infrastructure, you can do this through Self Hosted Agent Pools.

The following tools are already installed on the image:

Name        Description
AWS CLI     Used to interact with AWS.
Azure CLI   Used to interact with Azure. See setup instructions below.
Google CLI  Used to interact with Google Cloud. See setup instructions below.
pip3        Pip is the package installer for Python.
Python      The Python version installed is 3.9.2.
kubectl     Can be used to manage Kubernetes in your Terraform code.

AWS CLI:

Nothing is needed as the AWS CLI can read the $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY shell variables that Scalr passes.
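As a quick sanity check, a hook can confirm which identity those credentials resolve to. This is a sketch; it assumes an AWS provider configuration is shared with the workspace:

```shell
# Hook sketch: verify the credentials Scalr exported before the plan runs.
aws sts get-caller-identity
```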

Azure CLI:

az login --service-principal -u $ARM_CLIENT_ID -p $ARM_CLIENT_SECRET --tenant $ARM_TENANT_ID
az account set --subscription=$ARM_SUBSCRIPTION_ID

Google CLI:

printenv GOOGLE_CREDENTIALS > key.json
gcloud auth activate-service-account --key-file=key.json --project=$GOOGLE_PROJECT
rm -f key.json