State Management

When a run is executed in a Scalr workspace, the resources are deployed and the resulting Terraform or OpenTofu state is stored in Scalr. Scalr stores not only the current state file but all previous state files as well, with the ability to download them.

The permission state-versions:read must be granted for a user to read state.

State Storage

The Terraform or OpenTofu state is stored in a Scalr-managed Google Cloud Storage bucket in the US Central region. All state is stored with AES-256 encryption, and only the Scalr application can decrypt it. You also have the option to use your own Google Cloud bucket for storage. Other cloud providers will be added soon.

Customer Managed Bucket

With the bring-your-own-bucket option, you can store state and other blob objects in a GCP or AWS S3 bucket. This feature can only be used on new Scalr accounts or accounts that do not yet contain any blob objects, such as state files, runs, and logs.

To add your own bucket, you must replace the blob settings. This is not considered state backup, as the state will still be stored fully encrypted; use this option if your security or compliance team requires that state be stored in your own bucket.

GCP State Storage Configuration

To add a GCP bucket, you must supply the following information in the replace blob settings API call:

  • GCP Account ID
  • GCP service account JSON key with the IAM role Storage Admin assigned on the storage bucket
  • GCP encryption key (optional)
  • GCP project ID
  • GCP storage bucket name
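For reference, a replace-blob-settings request body could look like the sketch below. This is illustrative only: the attribute names are placeholders based on the values listed above, so check the Scalr API reference for the exact schema.

```json
{
  "data": {
    "type": "blob-settings",
    "attributes": {
      "backend-type": "google-cloud-storage",
      "google-project": "<gcp-project-id>",
      "google-storage-bucket": "<bucket-name>",
      "google-credentials": "<service-account-json-key>",
      "google-encryption-key": "<optional-encryption-key>"
    }
  }
}
```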

Here is a sample configuration for the GCP bucket settings:

Location type:         Multi-region
Default storage class: Standard
Public access:         Subject to object ACLs
Access control:        Fine-grained
Protection:            Soft Delete
Bucket retention:      None
Lifecycle rules:       None
Encryption:            Google-managed

Other settings can be used based on your organization's policies. If you have questions about the settings, please open a support ticket.
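A bucket matching the sample settings above could be created with the gcloud CLI. This is a sketch with placeholder names; verify the flags against your gcloud version:

```shell
# Placeholders: <bucket-name>, <gcp-project-id>.
# Soft delete and Google-managed encryption are the defaults for new buckets.
gcloud storage buckets create gs://<bucket-name> \
  --project=<gcp-project-id> \
  --location=us \
  --default-storage-class=standard \
  --no-uniform-bucket-level-access   # fine-grained (ACL-based) access control
```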

AWS S3 State Storage Configuration

First, create an S3 bucket in AWS. There are no Scalr-specific settings needed other than the bucket being accessible by Scalr.io. Next, you'll need to create the OIDC authentication for Scalr to access the bucket:

  • In AWS, go to IAM, then Identity providers, and click Add provider.
  • Select OpenID Connect, and add https://scalr.io as the provider URL.
  • The audience can be any value.
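The same provider can be created with the AWS CLI; <audience> is a placeholder for whatever audience value you choose (reuse it later in the Scalr blob settings):

```shell
# Placeholders: <audience>, <server-certificate-thumbprint>.
# Newer AWS releases may compute the thumbprint for you.
aws iam create-open-id-connect-provider \
  --url https://scalr.io \
  --client-id-list "<audience>" \
  --thumbprint-list "<server-certificate-thumbprint>"
```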

Next, create a role in AWS:

  • In AWS, go to IAM, then Roles, and click Create role.
  • Select web identity and select the OIDC provider you just created.
  • Add the AmazonS3FullAccess permission.
  • Name the role and create it.
  • Note: If you are using KMS encryption, you will need to give the role the ability to encrypt and decrypt with KMS using the following actions:
    • kms:GenerateDataKey
    • kms:Decrypt
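Behind the scenes, the web-identity role created above carries a trust policy similar to the sketch below. The account ID and audience are placeholders, and the console wizard generates this policy for you:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<account-id>:oidc-provider/scalr.io"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "scalr.io:aud": "<audience>"
        }
      }
    }
  ]
}
```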

In the Scalr API, call the replace blob endpoint and enter the following values:

  • Backend Type - aws-s3
  • AWS S3 Audience - The audience provided when the OIDC provider was created.
  • AWS S3 Bucket Name - The name of your S3 bucket.
  • AWS S3 Role ARN - The ARN of the role that was created.

Execute the API call and Scalr will now use S3 for state storage.
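A request body for the S3 configuration might look like the following sketch. The attribute names are placeholders mirroring the values above; consult the Scalr API reference for the exact schema:

```json
{
  "data": {
    "type": "blob-settings",
    "attributes": {
      "backend-type": "aws-s3",
      "aws-s3-audience": "<audience>",
      "aws-s3-bucket-name": "<bucket-name>",
      "aws-s3-role-arn": "arn:aws:iam::<account-id>:role/<role-name>"
    }
  }
}
```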

Rollback

The state rollback feature allows you to easily roll back to a previous state file in the event of an error or a corrupt file. This will not update or change your resources unless a new Terraform run is executed and a change is detected. The rollback option can be found on the page of the old state file.

Update State

📘 If using OpenTofu, substitute the tofu command for terraform.

Push

State can be updated and manipulated easily in Scalr, as Scalr can be added as a remote backend to local workspaces. Scalr supports all of the standard Terraform or OpenTofu open-source commands as long as the remote backend is added to your config.

First, get the API token for Scalr by running the following:

terraform login <account-name>.scalr.io

Next, make sure you have Scalr added as the backend for the local workspace:

terraform {
  backend "remote" {
    hostname = "<my-account>.scalr.io"
    organization = "<ID of environment>"
    workspaces {
      name = "<workspace-name>"
    }
  }
}

Next, you can pull the state with terraform state pull > terraform.tfstate

Now that you have the state locally, you can make updates to it and then push it back into Scalr with terraform state push terraform.tfstate

In some scenarios, you must ensure that the serial of the state is one number higher than the previous serial in the Scalr workspace. You will know you have an issue if you see the error: cannot overwrite existing state with serial 1 with a different state that has the same serial

This can be fixed by updating the serial in the actual state file before pushing:

{
  "version": 2,
  "terraform_version": "1.0.0",
  "serial": 2,
...
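If you prefer not to edit the file by hand, a small helper can bump the serial before pushing. This is a minimal sketch using only the Python standard library; bump_serial is a hypothetical helper name, not part of any Scalr tooling:

```python
import json

def bump_serial(path):
    """Increment the 'serial' field of a local state file in place.

    Returns the new serial so it can be checked before pushing.
    """
    with open(path) as f:
        state = json.load(f)
    state["serial"] = state.get("serial", 0) + 1
    with open(path, "w") as f:
        json.dump(state, f, indent=2)
    return state["serial"]
```

Run it against the pulled terraform.tfstate, then push the file back with terraform state push.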

Import

Resources can also be imported into the existing Terraform state in Scalr by running the standard import command: terraform import <resource-address> <resource-id>. The caveat is that the credentials and secrets need to be stored locally to do this with the Terraform CLI.

If you want to use the credentials and variables stored in Scalr instead, a pre-plan hook can be used to execute the import command. The steps to do this are:

  • Prepare a script with the import commands:
terraform import <address-1> <resource-id-1>  
terraform import <address-2> <resource-id-2>  
terraform import <address-3> <resource-id-3>
  • Add a pre-plan hook to execute the script
  • Trigger a terraform plan
  • After the plan is finished, delete the hook, as this is a one-time job.
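The import commands above can be collected into a one-time script for the pre-plan hook. The addresses and IDs are placeholders for your own resources:

```shell
#!/bin/sh
# One-time pre-plan hook script: import existing resources into state.
# Replace <address-N> and <resource-id-N> with your resource addresses and IDs.
set -e  # stop at the first failed import
terraform import <address-1> <resource-id-1>
terraform import <address-2> <resource-id-2>
terraform import <address-3> <resource-id-3>
```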

State Backup/Export

For organizations that want to have a copy of the state stored locally, this can be done using the Terraform/OpenTofu CLI or Scalr API to export the state.

If using the Terraform CLI, you can simply run terraform state pull > terraform.tfstate to pull it down locally.

If using the Scalr API, most users do this in two ways:

  • Run a job to do a bulk export on a recurring schedule.
  • Export the state using the post-apply custom hook after a successful apply.

In either case, the API can be called to get a state file based on the version ID, or to pull the current state for a workspace.
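A backup call might look like the following sketch. The endpoint path is a placeholder, not a real Scalr URL; consult the Scalr API documentation for the exact endpoints:

```shell
# Placeholders: <account-name>, <current-state-endpoint-for-workspace>.
# SCALR_TOKEN holds an API token for your Scalr account.
curl -H "Authorization: Bearer $SCALR_TOKEN" \
  "https://<account-name>.scalr.io/<current-state-endpoint-for-workspace>" \
  -o backup.tfstate
```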

We have also created a script that you can use to do the backup: https://github.com/Scalr/scalr-state-backup