Create a Workspace

A workspace can be created through the Scalr UI, the API, or the Scalr Terraform provider.

Workspace Options

Once you have decided which workspace type is right for you, there are a number of settings to select during workspace creation:

VCS Workspace

Set Up

Connect a VCS repository if you haven't already. Set the following options:

  • VCS Provider - Select the provider that contains the Terraform repositories.
  • Repository - Select the repository where the code resides.
  • Branch - Select the branch that Scalr should trigger a run from when a PR is opened or merged.
  • Terraform Working Directory - The directory where Terraform actually runs. It must be a subdirectory of the repository's top level (or of the specified subdirectory, if one is set). This comes in handy in the case of a mono repo.
  • Enable VCS-driven dry runs - This is a control mechanism to avoid unwanted dry runs on every commit.
  • Clone submodules - This allows you to specify whether git submodules should be fetched when cloning a VCS repository.
  • Skipping specific commits - If you would prefer that a run is not started for a given VCS commit or PR, add [skip ci] in the first line of the head commit message. Users will still be able to queue runs manually even if the configuration version is associated with a commit with the skip tag. Use [skip scalr ci] to avoid conflicts with other CI tools that use the same message.

There are other optional settings, which can be found in the workspace settings.

Execution

A run will execute upon the next commit, pull request, or manual execution of a run. Alternatively, you can set auto-queue runs to always, which automatically starts a run as soon as the workspace is created. Auto-queuing runs is helpful when you create workspaces through the Scalr provider, as a run can kick off in the workspace as soon as it is provisioned via Terraform, without manual intervention.
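As a rough sketch of that provider-driven flow, a VCS workspace with auto-queued runs might be declared like this (the resource and argument names come from the Scalr provider as we understand it, and all IDs are placeholders; verify against the provider documentation for your version):

```hcl
terraform {
  required_providers {
    scalr = {
      source = "registry.scalr.io/scalr/scalr"
    }
  }
}

# All IDs below are placeholders; replace them with values from your account.
resource "scalr_workspace" "network" {
  name              = "network-prod"
  environment_id    = "env-xxxxxxxxxx"
  vcs_provider_id   = "vcs-xxxxxxxxxx"
  working_directory = "networking"   # handy for mono repos
  auto_queue_runs   = "always"       # kick off a run as soon as the workspace exists

  vcs_repo {
    identifier = "my-org/infrastructure"
    branch     = "main"
  }
}
```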

CLI Workspace

Set Up

Obtain a Scalr token from the UI under Profile Settings, or by running terraform login <account-name>.scalr.io. When executing the login command, Scalr automatically creates the credentials and stores them locally in credentials.tfrc.json.

Set the following options:

  • For a CLI-based workspace, you simply have to choose the working directory if one is needed.
  • Add Scalr as the remote backend. The environment ID or name can be used as the organization. If the environment name has a space in it, the ID must be used:
terraform {
  backend "remote" {
    hostname = "<account-name>.scalr.io"
    organization = "<scalr-environment-name>"

    workspaces {
      name = "<workspace-name>"
    }
  }
}

There are other optional settings, which can be found in the workspace settings.

Execution

Once the setup is complete, run terraform init to connect to the workspace in the Scalr remote backend. From this point forward, the standard Terraform OSS commands will work as expected.

If there is an existing state file on the local system, or state that was previously stored in another remote backend, the terraform init command will automatically migrate the state to Scalr. See Migrating to Scalr for more details.

❗️

Version Mismatch

If the workspace is pre-created manually in Scalr and the Terraform version of the workspace does not match the version of the CLI then the following error will be displayed:

Error reading local state: state snapshot was created by Terraform vx.x.x, which is newer than current vx.x.x.

If you see this error, please ensure the Terraform version of the CLI matches the Terraform version of the workspace.
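One way to surface the mismatch early is to pin the expected CLI version in the configuration itself; Terraform then refuses to run with a non-matching binary before it touches any state (the constraint below is illustrative, set it to your workspace's Terraform version):

```hcl
terraform {
  # Fails fast if the local CLI doesn't satisfy this constraint,
  # before any state is read or written.
  required_version = "~> 1.5.0"
}
```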

No-Code (Module Sourced)

A no-code workspace can be deployed directly from the module registry, which pre-defines the module and version when you get to the workspace creation page, or by going to the create workspace page and setting the source to "module".

Set Up

Add a module to the module registry if you haven't already. Set the following options:

  • Module - Select the module that should be deployed into the workspace.
  • Module Version - Select the version of the module that will be deployed. Which versions are available can be controlled in the module registry.

There are other optional settings, which can be found in the workspace settings.

Execution

Upon creating the workspace, you will be redirected to the workspace dashboard. If variables require input, you will be prompted to fill them in; if not, you can manually queue a run. Alternatively, you can set auto-queue runs to always, which will automatically start a run once the workspace is created.

Set Terraform Variables

It is best practice to write Terraform code in a reusable manner, which is where variable files help. For VCS and CLI workspaces, Scalr will automatically pull in Terraform variable files when you enter the path to the file in the workspace settings. The variable file location is absolute to the repository root, not relative to the workspace's working directory.

If the local workspace contains any *.auto.tfvars files these will provide default variable values that Terraform will automatically use.

If variables in the *.auto.tfvars files have the same names as variables specified in the workspace, the predefined workspace values will be used. For map variables, the values in *.auto.tfvars are merged with the values of the same-named variable in the workspace.
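As a small illustration of that precedence (the file name, variable names, and values are hypothetical):

```hcl
# staging.auto.tfvars -- picked up automatically by Terraform.

# Ignored if the workspace already defines instance_type;
# the predefined workspace value wins.
instance_type = "t3.small"

# For a map, these keys are merged with the keys of the
# workspace-level common_tags variable.
common_tags = {
  team = "platform"
}
```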

Set Shell Variables

Shell variables (export var=value) can also be set if the Terraform configuration or run needs them, for things like setting parallelism (TF_CLI_ARGS_plan="-parallelism=N"), log levels (TF_LOG=TRACE), and more.

Shell variables can be set at all levels in Scalr and are inherited by lower levels. Use environment or account level for shell variables that are needed in multiple workspaces or environments. Use workspace level for shell variables that are specific to individual Terraform configurations.

It is also possible to use the Scalr provider to pull output from one workspace and post it as an environment or account shell variable to make it easily available to all other workspaces.
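A minimal sketch of that pattern, assuming the Scalr provider's scalr_variable resource (argument names and IDs are illustrative; check the provider documentation): publish a value as an environment-scoped shell variable so every workspace in the environment inherits it.

```hcl
# Placeholder ID and value; in practice the value would typically be
# read from another workspace's outputs.
resource "scalr_variable" "vpc_id" {
  key            = "VPC_ID"
  value          = "vpc-0abc123"
  category       = "shell"
  environment_id = "env-xxxxxxxxxx"
}
```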


What’s Next

Check out all of the remaining optional workspace settings next.