Version: 0.1.0-rc.5

Diagram: Backup execution path

A schedule or manual trigger launches a stateless Job. The Job authenticates to OpenBao, streams the Raft snapshot directly, and uploads it to object storage without sending snapshot bytes through the controller.

Decision matrix

Choose the backup auth path

| Path | Use it when | Operator behavior | Watch for |
| --- | --- | --- | --- |
| Static token | JWT auth is not available yet and you need a compatibility path. | The backup Job reads a long-lived token from a Secret in the cluster namespace. | This is a legacy path. Treat the token as a sensitive credential and rotate it deliberately. |
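For the static-token path, the backup Job reads the long-lived token from a Secret in the cluster namespace. A minimal sketch of such a Secret follows; the Secret name and the `token` key are illustrative assumptions, not names the operator mandates:

```yaml
# Illustrative only: the Secret name and key name are assumptions,
# not values this operator requires.
apiVersion: v1
kind: Secret
metadata:
  name: openbao-backup-token
type: Opaque
stringData:
  token: "s.xxxxxxxxxxxxxxxx" # long-lived OpenBao token (placeholder)
```

Because the token does not rotate on its own, plan a deliberate rotation procedure and move to the JWT path once it is available.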

Prerequisites

  • Provision a bucket or container in a supported provider:
    • S3 or S3-compatible storage such as MinIO or Ceph
    • Google Cloud Storage
    • Azure Blob Storage
  • Grant the backup identity write access to that storage location.
  • Allow egress to the storage endpoint. This is required for the Hardened profile.
  • Decide whether the backup and restore Jobs will use a Secret, explicit workload identity metadata, or provider-default credentials.
Separate identity surfaces

The main OpenBao Pods and backup Jobs use different ServiceAccounts. Cloud KMS unseal identity on the main workload does not automatically apply to backup or restore Jobs. Check CloudUnsealIdentityReady for the main Pods and BackupConfigurationReady for the generated backup Job identity path.
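The two condition types above surface on the cluster status. An illustrative excerpt of what to look for; the exact field layout is an assumption based on standard Kubernetes condition conventions:

```yaml
# Illustrative status excerpt; assumes the CRD follows the standard
# Kubernetes condition layout (type/status pairs).
status:
  conditions:
    - type: CloudUnsealIdentityReady # identity for the main OpenBao Pods
      status: "True"
    - type: BackupConfigurationReady # identity path for generated backup Jobs
      status: "True"
```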

Configure backup auth and storage

Use JWT auth when you want automatic token rotation and the cleanest separation between the cluster workload and backup jobs.

Automated setup

When spec.selfInit.oidc.enabled is true, the operator automatically configures:

  1. the JWT auth method (auth/jwt-operator)
  2. OIDC discovery
  3. the backup policy (openbao-operator-backup)
  4. the backup role (openbao-operator-backup)

No manual OpenBao auth configuration is required.

Configure

Enable OIDC bootstrap for automatic backup auth

```yaml
spec:
  selfInit:
    enabled: true
    oidc:
      enabled: true
```

JWT audience

The backup Job uses the audience from OPENBAO_JWT_AUDIENCE (default: openbao-internal). Set the same value in the OpenBao role bound_audiences and pass the env var to the operator through controller.extraEnv and provisioner.extraEnv in Helm.
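Passing the audience through Helm might look like the following; the exact shape of the `extraEnv` lists is an assumption based on the common Helm convention of accepting standard container env entries:

```yaml
# Helm values sketch: assumes controller.extraEnv and provisioner.extraEnv
# take standard container env entries (name/value pairs).
controller:
  extraEnv:
    - name: OPENBAO_JWT_AUDIENCE
      value: "openbao-internal"
provisioner:
  extraEnv:
    - name: OPENBAO_JWT_AUDIENCE
      value: "openbao-internal"
```

Keep the same value in the OpenBao role's bound_audiences so the projected token is accepted.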

Configure

Configure S3 or S3-compatible storage

```yaml
apiVersion: openbao.org/v1alpha1
kind: OpenBaoCluster
metadata:
  name: backup-cluster
spec:
  backup:
    schedule: "0 3 * * *"
    target:
      provider: s3
      endpoint: "https://s3.amazonaws.com"
      bucket: "openbao-backups"
      region: "us-east-1"
      pathPrefix: "clusters/backup-cluster"
      usePathStyle: false
      # Optional explicit web identity path:
      # roleArn: "arn:aws:iam::123456789012:role/openbao-backup"
      # Optional provider metadata for the generated ServiceAccount:
      # workloadIdentity:
      #   serviceAccountAnnotations:
      #     eks.amazonaws.com/role-arn: "arn:aws:iam::123456789012:role/openbao-backup"
      credentialsSecretRef:
        name: s3-credentials
```

S3 credentials

Create a Secret with these keys when you are not using provider-default identity:

  • accessKeyId
  • secretAccessKey
  • sessionToken (optional)
  • region (optional)
  • caCert (optional)
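A minimal static-credentials Secret using the keys above, matching the s3-credentials name referenced from credentialsSecretRef (all values are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: s3-credentials
type: Opaque
stringData:
  accessKeyId: "AKIAEXAMPLE"          # placeholder
  secretAccessKey: "EXAMPLEKEY"       # placeholder
  # sessionToken: "..."               # optional
  # region: "us-east-1"               # optional
  # caCert: |                         # optional PEM CA bundle
  #   -----BEGIN CERTIFICATE-----
  #   ...
```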

You can also omit credentialsSecretRef and rely on:

  • roleArn for the operator-managed web identity flow
  • ambient workload identity or default credentials
  • workloadIdentity.serviceAccountAnnotations when your platform integration is driven by ServiceAccount metadata

Advanced backup settings

Provider-specific options

Reference table

S3-specific options

| Option | Default | What it changes |
| --- | --- | --- |
| usePathStyle | false | Switch to path-style addressing for MinIO and some S3-compatible endpoints. |
| roleArn | none | Enables the explicit AWS web identity path managed by the operator. |
| pathPrefix | cluster-scoped default | Controls the object prefix used for backup keys so clusters stay separated inside a shared bucket. |

Configure

Set S3 provider-specific options

```yaml
spec:
  backup:
    target:
      provider: s3
      region: "eu-west-1"
      usePathStyle: true
      roleArn: "arn:aws:iam::123456789012:role/backup-role"
      pathPrefix: "clusters/prod-a"
```

Workload identity metadata

Use target.workloadIdentity when your cloud identity integration depends on ServiceAccount annotations or pod labels on the generated backup and restore workloads.

Configure

Attach identity metadata to backup and restore workloads

```yaml
spec:
  backup:
    target:
      workloadIdentity:
        serviceAccountAnnotations:
          iam.gke.io/gcp-service-account: "backup@my-project.iam.gserviceaccount.com"
          azure.workload.identity/client-id: "00000000-0000-0000-0000-000000000000"
        podLabels:
          azure.workload.identity/use: "true"
```

serviceAccountAnnotations are applied to the generated backup and restore ServiceAccounts. podLabels are applied to backup and restore Job pods without replacing operator-managed labels.

Emulator support

GCS and Azure support custom endpoints for local testing with fake-gcs-server and Azurite. When those endpoints use self-signed certificates, include the CA certificate in the credentials Secret.
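As an illustration, a GCS target pointed at a local fake-gcs-server might look like the sketch below. The provider value gcs and the Secret name are assumptions extrapolated from the S3 example; check the CRD reference for the exact identifiers:

```yaml
# Sketch only: "gcs" as the provider identifier and the Secret name
# are assumptions, not confirmed field values.
spec:
  backup:
    target:
      provider: gcs
      endpoint: "https://fake-gcs-server.test:4443"
      bucket: "test-backups"
      credentialsSecretRef:
        name: gcs-emulator-credentials # include a caCert key for the
                                       # emulator's self-signed CA
```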

Retention policy

Retention cleanup runs after a successful backup and works across S3, GCS, and Azure.

Configure

Keep a limited number of recent snapshots

```yaml
spec:
  backup:
    retention:
      maxCount: 7
      maxAge: "168h" # 7 days
```

Performance tuning

Reference table

Multipart upload tuning

| Parameter | Default | When to change it |
| --- | --- | --- |
| concurrency | 3 | Increase for throughput, or reduce it when memory pressure or object-store throttling becomes the limiting factor. |

Configure

Tune upload chunking and parallelism

```yaml
spec:
  backup:
    target:
      partSize: 20971520 # 20 MiB
      concurrency: 5
```

Pre-upgrade snapshots

Take a snapshot immediately before the rolling update or blue-green cutover begins.

Configure

Require a snapshot before upgrades start

```yaml
spec:
  upgrade:
    preUpgradeSnapshot: true
  backup:
    target: { ... } # storage target must already be configured
```

Upgrade safety

preUpgradeSnapshot: true only works when spec.backup.target is already configured. Confirm backup status before you start the upgrade rather than assuming the pre-upgrade snapshot can be taken on demand.

Verify and operate

Inspect

Inspect backup status on the cluster

```bash
kubectl get openbaocluster my-cluster \
  -o jsonpath='{.status.backup}'
```

Check lastSuccessfulBackup, nextScheduledBackup, and failure counters before you rely on the policy as a recovery control.

Apply

Trigger a manual backup from the generated CronJob

```bash
kubectl create job --from=cronjob/my-cluster-backup manual-backup-1
```

Use a manual run to prove the full path: identity, cluster auth, storage reachability, and object naming before the first production upgrade.

Prerelease documentation

This version tracks a prerelease build. Features and behavior may change before the next stable release.
