State Management
State storage in Scalr is determined by the organization's deployment model. As mentioned in the remote backend section, there are three models:
- Scalr Remote Backend w/ Scalr State Storage (default setting)
- Scalr Remote Backend w/ Customer Managed State Storage
- Non-Scalr Backend w/ Customer State Storage
This document breaks down the state storage options for each deployment model.
State Storage Options
Scalr Backend w/ Scalr State Storage
The Scalr backend with Scalr state storage is the default setting for all new accounts. If you have not explicitly modified the state storage settings, this is what your Scalr account is using. Using this model gives you full feature functionality in Scalr; there are no limitations to be aware of.
The Terraform or OpenTofu state is stored in a Scalr-managed Google Cloud Storage bucket in the US Central region. All state is stored with AES-256 encryption and only the Scalr application has the ability to decrypt the state.
When a run is executed in a Scalr workspace, resources are deployed and the resulting Terraform or OpenTofu state is stored in Scalr. Scalr stores not only the current state but all previous state files as well, each of which can be downloaded.
The state-versions:read permission must be granted for a user to read state.
Scalr Backend w/ Customer Managed State
With the bring your own bucket option, you have the ability to store state and other blob objects in a Google Cloud Storage or AWS S3 bucket. This feature can only be used on new Scalr accounts or accounts that do not yet contain any blob objects, such as state files, runs, and logs.
To add your own bucket, you must replace the blob settings. This is not considered a state backup, as the state will still be stored fully encrypted; use this option if your security or compliance team requires that state be stored in your own bucket.
The following objects will be stored in your bucket instead of in Scalr:
- State files
- Terraform/Tofu code artifacts
- Plan JSON and binaries
- Terraform/Tofu logs
The following objects that are used during runs are still stored in Scalr:
- Variables
- Provider configurations
GCP State Storage Configuration
To add a GCP bucket, you must supply the following information in the replace blob settings API call:
- GCP Account ID
- GCP service account JSON key with the Storage Admin IAM role assigned on the Google storage bucket
- GCP encryption key (optional)
- GCP project ID
- GCP storage bucket name
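For illustration only, a replace blob settings request for a GCP bucket might look like the sketch below. The endpoint path and attribute names are assumptions, not the exact Scalr API schema; consult the Scalr API reference for the real request format.
# Hypothetical sketch: the endpoint path and attribute names are assumptions.
# Replace the placeholders with your project ID, bucket name, and service account key.
curl -X PATCH "https://<account-name>.scalr.io/api/iacp/v3/blob-settings" \
  -H "Authorization: Bearer $SCALR_TOKEN" \
  -H "Content-Type: application/vnd.api+json" \
  -d '{
    "data": {
      "type": "blob-settings",
      "attributes": {
        "google-project": "<gcp-project-id>",
        "google-storage-bucket": "<bucket-name>",
        "google-credentials": "<service-account-json-key>",
        "google-encryption-key": "<optional-encryption-key>"
      }
    }
  }'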
Here is a sample configuration for the GCP bucket settings:
Location type: Multi-region
Default storage class: Standard
Public access: Subject to object ACLs
Access control: Fine-grained
Protection: Soft Delete
Bucket retention: None
Lifecycle rules: None
Encryption: Google-managed
Other settings can be used based on your organization's policies; if you have questions about the settings, please open a support ticket.
AWS S3 State Storage Configuration
First, create an S3 bucket in AWS; there are no Scalr-specific settings needed other than that the bucket must be accessible by Scalr.io. Next, you'll need to create the OIDC authentication for Scalr to access the bucket:
- In AWS, go to IAM, then identity providers, and click "Add provider".
- Select OpenID Connect and add https://scalr.io as the provider URL.
- The audience can be any value.
Next, create a role in AWS:
- In AWS, go to IAM, then roles, and click "Create role".
- Select web identity and choose the OIDC provider you just created.
- Add the AmazonS3FullAccess permission.
- Name the role and create it.
- Note: If you are using KMS encryption, you will need to give the role the ability to encrypt and decrypt with KMS using the following permissions:
- kms:GenerateDataKey
- kms:Decrypt
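If you prefer to script the console steps above, a rough AWS CLI equivalent is sketched below. The role name, audience value, and account ID are placeholders, and the trust policy is an assumption of what the web identity role created in the console looks like.
# Hypothetical sketch of the console steps above using the AWS CLI.
# "scalr.io" as the audience and "scalr-state-storage" as the role name are placeholders.

# 1. Create the OIDC identity provider for Scalr.
aws iam create-open-id-connect-provider \
  --url https://scalr.io \
  --client-id-list "scalr.io"

# 2. Create a role that trusts the Scalr OIDC provider.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Federated": "arn:aws:iam::<aws-account-id>:oidc-provider/scalr.io" },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": { "StringEquals": { "scalr.io:aud": "scalr.io" } }
    }
  ]
}
EOF
aws iam create-role \
  --role-name scalr-state-storage \
  --assume-role-policy-document file://trust-policy.json

# 3. Attach the S3 permissions (AmazonS3FullAccess, as in the steps above).
aws iam attach-role-policy \
  --role-name scalr-state-storage \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess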
In the Scalr API, call the replace blob endpoint and enter the following values:
- Backend Type - aws-s3
- AWS S3 Audience - The audience provided when the OIDC provider was created.
- AWS S3 Bucket Name - The name of your S3 bucket.
- AWS S3 Role ARN - The ARN of the role that was created.
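As with the GCP example above, the exact endpoint path and attribute names below are assumptions; the request might look roughly like this:
# Hypothetical sketch: the endpoint path and attribute names are assumptions.
curl -X PATCH "https://<account-name>.scalr.io/api/iacp/v3/blob-settings" \
  -H "Authorization: Bearer $SCALR_TOKEN" \
  -H "Content-Type: application/vnd.api+json" \
  -d '{
    "data": {
      "type": "blob-settings",
      "attributes": {
        "backend-type": "aws-s3",
        "aws-s3-audience": "<audience>",
        "aws-s3-bucket-name": "<bucket-name>",
        "aws-s3-role-arn": "arn:aws:iam::<aws-account-id>:role/scalr-state-storage"
      }
    }
  }'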
Execute the API call and Scalr will now use S3 for state storage.
Non-Scalr Backend w/ Customer Managed State
If using a non-Scalr backend, your state will be stored in the backend defined in your code. The non-Scalr backend can be enabled or disabled on a per-environment basis.
Once the Scalr backend is disabled, it cannot be turned back on for the environment.
An example backend that is not Scalr would be S3. If using S3 as your backend, you likely have a code snippet similar to this defined:
terraform {
backend "s3" {
bucket = "yourbucket"
key = "path/to/key"
region = "us-east-1"
}
}
This means that the state will be stored in yourbucket, specifically under the path/to/key object key. It is up to the owner of the backend to encrypt the state. Scalr provider configurations can be used to authenticate to other backends; see more on that here. It is important to note that non-Scalr backends will have some limited functionality, which can be seen here.
Import/Update State
If using OpenTofu, substitute the tofu command for terraform.
Note: This functionality is only available if the Scalr backend is used.
State can be imported into workspaces using the Scalr UI, the API, or the Terraform/OpenTofu CLI. When importing state into a new workspace, no manipulation is needed as long as it is a valid state file. When importing state into an existing workspace with existing state, the serial in the remote state is validated.
The serial of the local state must be at least one number higher than the serial in the remote state. For example, if the state in Scalr shows the following (serial set to 3):
{
"version": 4,
"terraform_version": "1.5.7",
"serial": 3,
...
Then ensure that the new state locally has serial set to 4 before pushing:
{
"version": 4,
"terraform_version": "1.5.7",
"serial": 4,
...
If not, you will see an error similar to the following: “Failed to write state: cannot overwrite existing state with serial 1 with a different state that has the same serial.”
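If your local copy still has the same serial as the remote state, one way to bump it before pushing is with jq (assuming jq is installed); the filename below is just an example:
# Increment the serial in a local state file by one before pushing it back.
jq '.serial += 1' terraform.tfstate > bumped.tfstate && mv bumped.tfstate terraform.tfstate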
Scalr UI
Note: This functionality is only available if the Scalr backend is used.
To import state, navigate to the state tab within a workspace and click on "Upload state file".
Terraform Push
Note: This functionality is only available if the Scalr backend is used.
State can be updated and manipulated easily in Scalr as Scalr can be added as a remote backend to local workspaces. Scalr supports all of the standard Terraform or OpenTofu open-source commands as long as the remote backend is added to your config.
First, get the API token for Scalr by running the following:
terraform login <account-name>.scalr.io
Next, make sure you have Scalr added as the backend for the local workspace:
terraform {
backend "remote" {
hostname = "<my-account>.scalr.io"
organization = "<ID of environment>"
workspaces {
name = "<workspace-name>"
}
}
}
Next, you can pull the state with terraform state pull > terraform.tfstate
Now that you have the state locally, you can make updates to it and then push it back into Scalr with terraform state push terraform.tfstate
In some scenarios, you must ensure that the serial of the state is at least one number higher than the previous serial that was in the Scalr workspace. You will know you have an issue if you see the error: cannot overwrite existing state with serial 1 with a different state that has the same serial
This can be fixed by updating the serial in the actual state file before pushing:
{
"version": 2,
"terraform_version": "1.0.0",
"serial": 2,
...
Import
Resources can also be imported into the existing Terraform state in Scalr by running the standard import command: terraform import <resource-address> <resource-id>. There is a caveat in that the credentials and secrets need to be stored locally to do this with the Terraform CLI.
If you do want to use the credentials and variables stored in Scalr, then a pre-plan hook can be used to execute the import command. The steps to do this are:
- Prepare a script with the import commands (a minimal sketch is shown after these steps):
terraform import <address-1> <resource-id-1>
terraform import <address-2> <resource-id-2>
terraform import <address-3> <resource-id-3>
- Add a pre-plan hook to execute the script.
- Trigger a terraform plan.
- After the plan is finished, delete the hook, as this is a one-time job.
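A minimal sketch of such a script, with placeholder resource addresses and IDs, might look like:
#!/usr/bin/env bash
# One-time import script intended to be executed by a pre-plan hook.
# The addresses and IDs below are placeholders; replace them with your resources.
set -euo pipefail

terraform import <address-1> <resource-id-1>
terraform import <address-2> <resource-id-2>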
State Backup/Export
For organizations that want to have a copy of the state stored locally, this can be done using the Terraform/OpenTofu CLI or Scalr API to export the state.
If using the Terraform CLI, you can simply run terraform state pull > terraform.tfstate to pull it down locally.
If using the Scalr API, most users do this in two ways:
- Run a job to do a bulk export on a recurring schedule.
- Export the state using the post-apply custom hook after a successful apply.
In either case, this API can be called to get a state file based on the version ID, or this API can be called to pull the current state for a workspace.
We have also created a script that you can use to do the backup: https://github.com/Scalr/scalr-state-backup
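For the CLI route, a small script like the following can be scheduled (for example, via cron). It assumes the working directory already has the Scalr remote backend configured and a valid API token from terraform login; the backup directory name is just an example.
#!/usr/bin/env bash
# Pull the current state from Scalr and keep a timestamped local copy.
set -euo pipefail

BACKUP_DIR="./state-backups"   # example location; adjust to your needs
mkdir -p "$BACKUP_DIR"

terraform state pull > "$BACKUP_DIR/terraform-$(date +%Y%m%d-%H%M%S).tfstate"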
Rollback
Note: This functionality is only available if the Scalr backend is used.
The state rollback feature allows you to easily roll back to a previous state file in the event of an error or corrupt file. This will not update or change your resources unless a new Terraform run is executed and a change is detected. The rollback option can be found on the page of the old state file.
Sharing State
Sharing State Between Environments
State sharing between environments is disabled by default. To share state between environments, set a shell variable SCALR_RUNNER_ROLE with the value runner-account-access at the account scope.
It is common practice to reference outputs from other workspaces so that a Terraform configuration can make use of resources that have been deployed in another workspace. This is known as “remote state” and accessing the remote state is done using the terraform_remote_state data source as shown in this example.
data "terraform_remote_state" "state-1" {
backend = "remote"
config = {
hostname = "<host>"
organization = "<env_id>"
workspaces = {
name = "<workspace name>"
}
}
}
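The outputs of the referenced workspace can then be accessed in your configuration as data.terraform_remote_state.state-1.outputs.<output_name>.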
In Scalr, a workspace can reference the outputs of any other workspace in any environment if SCALR_RUNNER_ROLE=runner-account-access has been set as a shell variable at the account scope. Alternatively, if SCALR_RUNNER_ROLE is not set at the account scope, then workspaces will only have access to the remote state files within the same environment.
Sharing Within an Environment
Scalr workspaces have the option to limit which workspaces can access the state. Workspace owners can allow the following sharing options:
- All workspaces in an environment
- No workspaces (only accessed by the current workspace)
- Some workspaces
When "Some workspaces" is selected, users will be prompted to select the workspaces that state can be shared with.
If the setting is updated to "No workspaces" or "Some workspaces", the state will not be able to be shared across environments even if the SCALR_RUNNER_ROLE variable is set.
Remote State Code Snippet
Note: This functionality is only available if the Scalr backend is used.
To obtain the remote state code snippet for a workspace, go to the workspace dashboard and click on Output Usage. A code snippet similar to the one above will be provided which can then be pasted into your Terraform code.