Troubleshooting

Performance issues

If you're experiencing performance issues such as slow runs, out-of-memory errors, or out-of-disk-space errors, make sure that at least high-level monitoring is configured for your agents and that they have sufficient basic resources (CPU, memory, and storage).

Learn more in the requirements section.
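
As a quick first check, you can inspect current resource usage on the agent host or cluster (the container and namespace names below match the defaults used elsewhere in this guide; replace them with your own):

# Docker: live CPU and memory usage of the agent container
docker stats --no-stream scalr-agent

# Docker: disk space used by images, containers, and volumes on the host
docker system df

# Kubernetes: CPU and memory usage of pods in the agent namespace
# (requires the metrics-server add-on)
kubectl top pods -n scalr-agent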

You can also configure OpenTelemetry Metrics and/or Tracing to gain detailed service performance insights and identify bottlenecks in your On-Prem setup.
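
As an illustration only, and assuming your agent deployment honors the standard OpenTelemetry SDK environment variables (check the agent configuration reference for the exact option names it supports), pointing a Docker-based agent at an OTLP collector could look like this:

# Illustrative sketch: the OTEL_* variables below are the standard
# OpenTelemetry SDK settings, not confirmed Scalr Agent options, and the
# collector endpoint and image are placeholders for your own values.
docker run -d --name scalr-agent \
  -e OTEL_EXPORTER_OTLP_ENDPOINT="http://otel-collector:4317" \
  -e OTEL_METRICS_EXPORTER="otlp" \
  -e OTEL_TRACES_EXPORTER="otlp" \
  <scalr-agent-image-and-options>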

Internal errors

If you encounter internal system errors or unexpected behavior, please open a support request at the Scalr Support Center.

Before doing so, enable debug logs using the SCALR_AGENT_DEBUG option. Then collect the debug-level application logs covering the time window when the incident occurred, and attach them to your support ticket.
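
For example, in a Docker deployment you can enable debug logging by setting the SCALR_AGENT_DEBUG environment variable when starting the container (the value "true" is shown as an assumption; see the logging documentation for the accepted values):

# Enable debug-level logging (value "true" assumed; check the logging docs)
docker run -d --name scalr-agent \
  -e SCALR_AGENT_DEBUG=true \
  <your-existing-agent-options-and-image>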

Learn more about the Scalr Agent logging system.

Collect logs from Docker runtime

For Docker deployments, use the docker logs command to capture logs (replace scalr-agent with the actual container name if needed):

docker logs scalr-agent > scalr-agent-log.txt
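
If the incident is recent, you can narrow the capture to the relevant time window with the --since flag, which accepts a relative duration or a timestamp:

# Capture only the last two hours of agent output
docker logs --since 2h scalr-agent > scalr-agent-log.txt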

Collect logs from Kubernetes runtime

For Kubernetes deployments, the preferred approach is to use kubectl logs to collect logs from every pod in the Scalr agent namespace and archive them in a single bundle. Set the ns variable to the name of your Helm release namespace and run:

ns="scalr-agent"
mkdir -p logs && for pod in $(kubectl get pods -n $ns -o name); do kubectl logs -n $ns $pod > "logs/${pod##*/}.log"; done && zip -r scalr-agent-logs.zip logs && rm -rf logs

This command generates a ZIP bundle named scalr-agent-logs.zip containing logs from all pods in the namespace. Attach it to your support request.

Pull the logs as soon as possible after an incident: this command cannot retrieve logs from pods that have already been terminated, and for restarted pods it only captures the current container's output.
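
If a pod's container has restarted but the pod itself still exists, you may be able to recover the output written before the restart with the --previous flag (pod name shown as a placeholder):

# Logs from the previous container instance of a restarted pod
kubectl logs -n scalr-agent <pod-name> --previous > <pod-name>-previous.log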

🚧

This method requires the kubectl and zip commands, with sufficient permissions to read pod logs from the agent release namespace.

Learn more about the Kubernetes command-line tool.