Version: 0.1.0

Diagram: backup execution path

A schedule or manual trigger launches a stateless Job. The Job authenticates to OpenBao, streams the Raft snapshot directly, and uploads it to object storage without sending snapshot bytes through the controller.

Decision matrix

Choose the backup auth path.

| Path | Use it when | Operator behavior | Watch for |
| --- | --- | --- | --- |
| Static token | JWT auth is not available yet and you need a compatibility path. | The backup Job reads a long-lived token from a Secret in the cluster namespace. | This is a legacy path. Treat the token as a sensitive credential and rotate it deliberately. |

Prerequisites

  • Provision a bucket or container in a supported provider:
    • S3 or S3-compatible storage such as MinIO or Ceph
    • Google Cloud Storage
    • Azure Blob Storage
  • Grant the backup identity write access to that storage location.
  • Allow egress to the storage endpoint. This is required for the Hardened profile.
  • Decide whether the backup and restore Jobs will use a Secret, explicit workload identity metadata, or provider-default credentials.
Separate identity surfaces

The main OpenBao Pods and backup Jobs use different ServiceAccounts. Cloud KMS unseal identity on the main workload does not automatically apply to backup or restore Jobs. Check CloudUnsealIdentityReady for the main Pods and BackupConfigurationReady for the generated backup Job identity path.

First successful backup path

Decision matrix

Use this order the first time you wire backups.

| Step | What to do | What proves success |
| --- | --- | --- |
| 1. Enable backup auth | Set spec.selfInit.oidc.enabled: true so the operator configures the JWT auth method, backup policy, and backup role. | The operator completes JWT auth setup without any manual OpenBao auth configuration. |
| 2. Configure storage target | Choose S3, GCS, or Azure and make the credentials or workload identity path explicit. | The cluster spec contains a complete spec.backup.target and the referenced Secret or workload identity metadata already exists. |
| 3. Wait for backup readiness | Apply the updated OpenBaoCluster and check status before assuming the CronJob can run. | BackupConfigurationReady=True and no storage or identity validation failures remain. |
| 4. Force one manual run | Trigger a backup from the generated CronJob before the first upgrade window. | A real snapshot lands in object storage and status.backup.lastSuccessfulBackup advances. |
For most first-time production users

The cleanest first backup path is:

  1. enable spec.selfInit.oidc.enabled: true
  2. configure spec.backup.target
  3. wait for BackupConfigurationReady=True
  4. trigger one manual backup and confirm the object exists in storage
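Steps 3 and 4 can be driven entirely from kubectl. A sketch, assuming the cluster is named my-cluster in the openbao-prod namespace and that the generated CronJob follows the my-cluster-backup naming shown later on this page:

```shell
# Step 3: block until the operator reports backup configuration as ready.
kubectl wait openbaocluster/my-cluster -n openbao-prod \
  --for=condition=BackupConfigurationReady=True --timeout=10m

# Step 4: force one manual run from the generated CronJob and wait for it.
kubectl create job --from=cronjob/my-cluster-backup first-backup -n openbao-prod
kubectl wait job/first-backup -n openbao-prod --for=condition=Complete --timeout=15m
```

After the Job completes, confirm the snapshot object exists in the bucket before treating the path as proven.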

Configure backup auth and storage

Use JWT auth when you want automatic token rotation and the cleanest separation between the cluster workload and backup jobs.

Automated setup

When spec.selfInit.oidc.enabled is true, the operator automatically configures:

  1. the JWT auth method (auth/jwt-operator)
  2. OIDC discovery
  3. the backup policy (openbao-operator-backup)
  4. the backup role (openbao-operator-backup)

No manual OpenBao auth configuration is required.
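If you want to see the generated objects inside OpenBao itself, a hedged check with the bao CLI, assuming you have a token with read access and using the mount, policy, and role names listed above:

```shell
# The jwt-operator auth mount should appear in the list.
bao auth list

# Inspect the generated backup role and policy.
bao read auth/jwt-operator/role/openbao-operator-backup
bao policy read openbao-operator-backup
```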

Configure

Enable OIDC bootstrap for automatic backup auth

```yaml
spec:
  selfInit:
    enabled: true
    oidc:
      enabled: true
```

JWT audience

The backup Job uses the audience from OPENBAO_JWT_AUDIENCE (default: openbao-internal). Set the same value in the OpenBao role bound_audiences and pass the env var to the operator through controller.extraEnv and provisioner.extraEnv in Helm.
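As an illustration, a Helm values fragment that pins the audience on both components. The controller.extraEnv and provisioner.extraEnv keys come from the paragraph above; the list shape assumes standard Kubernetes env entries:

```yaml
controller:
  extraEnv:
    - name: OPENBAO_JWT_AUDIENCE
      value: "openbao-internal"
provisioner:
  extraEnv:
    - name: OPENBAO_JWT_AUDIENCE
      value: "openbao-internal"
```

Whatever value you choose must match bound_audiences on the OpenBao role, or token validation fails.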

Configure

Configure S3 or S3-compatible storage

```yaml
apiVersion: openbao.org/v1alpha1
kind: OpenBaoCluster
metadata:
  name: backup-cluster
spec:
  backup:
    schedule: "0 3 * * *"
    target:
      provider: s3
      endpoint: "https://s3.amazonaws.com"
      bucket: "openbao-backups"
      region: "us-east-1"
      pathPrefix: "clusters/backup-cluster"
      usePathStyle: false
      # Optional explicit web identity path:
      # roleArn: "arn:aws:iam::123456789012:role/openbao-backup"
      # Optional provider metadata for the generated ServiceAccount:
      # workloadIdentity:
      #   serviceAccountAnnotations:
      #     eks.amazonaws.com/role-arn: "arn:aws:iam::123456789012:role/openbao-backup"
      credentialsSecretRef:
        name: s3-credentials
```

S3 credentials

Create a Secret with these keys when you are not using provider-default identity:

  • accessKeyId
  • secretAccessKey
  • sessionToken (optional)
  • region (optional)
  • caCert (optional)
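A minimal Secret carrying those keys might look like this. stringData is used for readability and all values are placeholders; the namespace must match the OpenBaoCluster:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: s3-credentials
  namespace: openbao-prod
type: Opaque
stringData:
  accessKeyId: "PLACEHOLDER-ACCESS-KEY"
  secretAccessKey: "PLACEHOLDER-SECRET-KEY"
  # Optional keys:
  # sessionToken: "..."
  # region: "us-east-1"
  # caCert: |
  #   -----BEGIN CERTIFICATE-----
  #   ...
```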

You can also omit credentialsSecretRef and rely on:

  • roleArn for the operator-managed web identity flow
  • ambient workload identity or default credentials
  • workloadIdentity.serviceAccountAnnotations when your platform integration is driven by ServiceAccount metadata

Minimal working example

Configure

Use a minimal JWT-backed S3 backup baseline

```yaml
apiVersion: openbao.org/v1alpha1
kind: OpenBaoCluster
metadata:
  name: my-cluster
  namespace: openbao-prod
spec:
  selfInit:
    enabled: true
    oidc:
      enabled: true
  backup:
    schedule: "0 3 * * *"
    target:
      provider: s3
      endpoint: "https://s3.amazonaws.com"
      bucket: "openbao-backups"
      region: "us-east-1"
      pathPrefix: "clusters/my-cluster"
      credentialsSecretRef:
        name: s3-credentials
```

This is the smallest supported production-oriented starting point. The namespace must already contain the referenced Secret, and the backup Job still needs network egress to the object storage endpoint.

Advanced backup settings

Provider-specific options

Reference table

S3-specific options.

| Option | Default | What it changes |
| --- | --- | --- |
| usePathStyle | false | Switch to path-style addressing for MinIO and some S3-compatible endpoints. |
| roleArn | none | Enables the explicit AWS web identity path managed by the operator. |
| pathPrefix | cluster-scoped default | Controls the object prefix used for backup keys so clusters stay separated inside a shared bucket. |

Configure

Set S3 provider-specific options

```yaml
spec:
  backup:
    target:
      provider: s3
      region: "eu-west-1"
      usePathStyle: true
      roleArn: "arn:aws:iam::123456789012:role/backup-role"
      pathPrefix: "clusters/prod-a"
```

Workload identity metadata

Use target.workloadIdentity when your cloud identity integration depends on ServiceAccount annotations or pod labels on the generated backup and restore workloads.

Configure

Attach identity metadata to backup and restore workloads

```yaml
spec:
  backup:
    target:
      workloadIdentity:
        serviceAccountAnnotations:
          iam.gke.io/gcp-service-account: "backup@my-project.iam.gserviceaccount.com"
          azure.workload.identity/client-id: "00000000-0000-0000-0000-000000000000"
        podLabels:
          azure.workload.identity/use: "true"
```

serviceAccountAnnotations are applied to the generated backup and restore ServiceAccounts. podLabels are applied to backup and restore Job pods without replacing operator-managed labels.

Emulator support

GCS and Azure support custom endpoints for local testing with fake-gcs-server and Azurite. When those endpoints use self-signed certificates, include the CA certificate in the credentials Secret.
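For example, a target pointed at an in-cluster fake-gcs-server. This is a sketch: provider: gcs is inferred from the supported-provider list above, the endpoint address is hypothetical, and the GCS target is assumed to accept the same endpoint and credentialsSecretRef fields as the S3 examples:

```yaml
spec:
  backup:
    target:
      provider: gcs                                     # assumed value for the GCS provider
      endpoint: "https://fake-gcs-server.test.svc:4443" # hypothetical emulator address
      bucket: "openbao-backups"
      credentialsSecretRef:
        name: gcs-credentials   # include caCert here when the emulator uses a self-signed cert
```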

Retention policy

Retention cleanup runs after a successful backup and works across S3, GCS, and Azure.

Configure

Keep a limited number of recent snapshots

```yaml
spec:
  backup:
    retention:
      maxCount: 7
      maxAge: "168h"
```

Performance tuning

Reference table

Multipart upload tuning.

| Parameter | Default | When to change it |
| --- | --- | --- |
| concurrency | 3 | Increase for throughput, or reduce it when memory pressure or object-store throttling becomes the limiting factor. |

Configure

Tune upload chunking and parallelism

```yaml
spec:
  backup:
    target:
      partSize: 20971520
      concurrency: 5
```

Pre-upgrade snapshots

Take a snapshot immediately before the rolling update or blue-green cutover begins.

Configure

Require a snapshot before upgrades start

```yaml
spec:
  upgrade:
    preUpgradeSnapshot: true
  backup:
    target: { ... }
```

Upgrade safety

preUpgradeSnapshot: true only works when spec.backup.target is already configured. Confirm backup status before you start the upgrade rather than assuming the pre-upgrade snapshot can be taken on demand.

Verify and operate

Verify

Check backup readiness before you wait for the schedule

```bash
kubectl get openbaocluster my-cluster -n <namespace> \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
```

Confirm BackupConfigurationReady=True before you rely on the schedule or trigger a manual run.

Inspect

Inspect backup status on the cluster

```bash
kubectl get openbaocluster my-cluster \
  -o jsonpath='{.status.backup}'
```

Check lastSuccessfulBackup, nextScheduledBackup, and failure counters before you rely on the policy as a recovery control.
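To pull a single field rather than the whole object, for example the lastSuccessfulBackup timestamp named above:

```shell
kubectl get openbaocluster my-cluster \
  -o jsonpath='{.status.backup.lastSuccessfulBackup}'
```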

Apply

Trigger a manual backup from the generated CronJob

```bash
kubectl create job --from=cronjob/my-cluster-backup manual-backup-1
```

Use a manual run to prove the full path: identity, cluster auth, storage reachability, and object naming before the first production upgrade.
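One way to confirm the snapshot object actually landed, using the bucket and pathPrefix values from the minimal example above and assuming an aws CLI configured with read access to the bucket:

```shell
aws s3 ls s3://openbao-backups/clusters/my-cluster/ --recursive
```

A fresh object with a recent timestamp under the cluster's prefix is the end-to-end proof.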

