Backups¶
The Operator provides a robust, Kubernetes-native backup system that streams Raft snapshots directly to object storage.
Note: For restore procedures, see Restore from Backup.
Architecture¶
Backups run as transient Kubernetes Jobs, triggered by a Cron schedule or manually.
```mermaid
flowchart LR
    Cron[Cron Schedule] -->|Triggers| Job[Backup Job]
    Job -->|Auths via JWT| Bao[OpenBao Cluster]
    Bao -->|Streams Snapshot| Job
    Job -->|Uploads| Storage[Object Storage]

    classDef write fill:transparent,stroke:#22c55e,stroke-width:2px,color:#fff;
    classDef read fill:transparent,stroke:#60a5fa,stroke-width:2px,color:#fff;
    classDef process fill:transparent,stroke:#9333ea,stroke-width:2px,color:#fff;
    class Job process;
    class Bao read;
    class Storage write;
```
Prerequisites¶
- Object Storage: Configured bucket or container in one of the supported providers:
- S3: AWS S3 or S3-compatible storage (MinIO, Ceph, etc.)
- GCS: Google Cloud Storage bucket
- Azure: Azure Blob Storage container
- Credentials: Write permissions for the bucket/container
- Network: Egress allowed to the storage endpoint (critical for the `Hardened` profile)
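If the `Hardened` profile enforces default-deny NetworkPolicies, the backup Jobs need an explicit egress rule to reach the storage endpoint. The sketch below is illustrative only: the pod label selector and namespace are assumptions, not names defined by the Operator, so adjust them to match the labels your backup Jobs actually carry.

```yaml
# Illustrative NetworkPolicy: allow backup pods to reach object storage over HTTPS.
# The namespace and matchLabels values are assumptions; verify them against the
# labels the Operator sets on its backup Jobs.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backup-egress
  namespace: openbao
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: backup  # hypothetical label
  policyTypes:
    - Egress
  egress:
    - ports:
        - protocol: TCP
          port: 443
```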
Configuration¶
Select your authentication method. JWT Auth is recommended for security (auto-rotating tokens).
This method uses a projected ServiceAccount token to authenticate with OpenBao.
One-time Setup: Configure JWT Auth
Before creating the cluster, ensure OpenBao is configured to accept the Kubernetes JWT token.
Using Self-Init:
```yaml
spec:
  selfInit:
    requests:
      # 1. Enable JWT Auth
      - name: enable-jwt-auth
        operation: update
        path: sys/auth/jwt
        authMethod: { type: jwt }
      # 2. Configure JWT Validation
      - name: configure-jwt-auth
        operation: update
        path: auth/jwt/config
        data:
          bound_issuer: "https://kubernetes.default.svc"
          jwt_validation_pubkeys: ["<K8S_JWT_PUBLIC_KEY>"]
      # 3. Create Backup Policy
      - name: create-backup-policy
        operation: update
        path: sys/policies/acl/backup
        policy:
          policy: |
            path "sys/storage/raft/snapshot" { capabilities = ["read"] }
      # 4. Create Role bound to ServiceAccount
      - name: create-backup-jwt-role
        operation: update
        path: auth/jwt/role/backup
        data:
          role_type: jwt
          bound_audiences: ["openbao-internal"]
          bound_claims:
            kubernetes.io/namespace: openbao
            kubernetes.io/serviceaccount/name: backup-cluster-backup-serviceaccount
          token_policies: backup
          ttl: 1h
```
JWT audience
The backup Job uses the audience from `OPENBAO_JWT_AUDIENCE` (default: `openbao-internal`).
Set the same value in the OpenBao role's `bound_audiences` and pass the env var to the operator
(`controller.extraEnv` and `provisioner.extraEnv` in Helm).
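The Helm wiring mentioned above can be sketched as follows; the `extraEnv` keys and the env var name come from the note, while the values shown assume you keep the default audience:

```yaml
# values.yaml sketch: propagate the same JWT audience to both operator components.
controller:
  extraEnv:
    - name: OPENBAO_JWT_AUDIENCE
      value: openbao-internal
provisioner:
  extraEnv:
    - name: OPENBAO_JWT_AUDIENCE
      value: openbao-internal
```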
Cluster Configuration:
Select your storage provider:
```yaml
apiVersion: openbao.org/v1alpha1
kind: OpenBaoCluster
metadata:
  name: backup-cluster
spec:
  backup:
    schedule: "0 3 * * *" # Daily at 3 AM
    executorImage: "openbao/backup-executor:v0.1.0"
    jwtAuthRole: backup # Matches the role name above
    target:
      provider: s3 # Default, can be omitted
      endpoint: "https://s3.amazonaws.com"
      bucket: "openbao-backups"
      region: "us-east-1"
      pathPrefix: "clusters/backup-cluster"
      usePathStyle: false # Set true for MinIO/S3-compatible
      credentialsSecretRef:
        name: s3-credentials
```
S3 Credentials Secret
Create a Secret with keys:
- `accessKeyId`: AWS access key ID
- `secretAccessKey`: AWS secret access key
- `sessionToken`: (optional) Temporary session token
- `region`: (optional) Override region
- `caCert`: (optional) Custom CA certificate for self-signed endpoints
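Using the key names above, the Secret can be created like this (replace the placeholders with your real credentials; the Secret name matches the `credentialsSecretRef` in the example):

```shell
# Create the S3 credentials Secret expected by credentialsSecretRef above.
kubectl create secret generic s3-credentials \
  --from-literal=accessKeyId='<ACCESS_KEY_ID>' \
  --from-literal=secretAccessKey='<SECRET_ACCESS_KEY>'
```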
```yaml
apiVersion: openbao.org/v1alpha1
kind: OpenBaoCluster
metadata:
  name: backup-cluster
spec:
  backup:
    schedule: "0 3 * * *"
    executorImage: "openbao/backup-executor:v0.1.0"
    jwtAuthRole: backup
    target:
      provider: gcs
      bucket: "openbao-backups"
      pathPrefix: "clusters/backup-cluster"
      gcs:
        project: "my-gcp-project" # Optional if using ADC
      credentialsSecretRef:
        name: gcs-credentials
```
GCS Credentials
Option 1: Service Account Key (Recommended)
Create a Secret with key credentials.json containing the service account JSON key:
```shell
kubectl create secret generic gcs-credentials \
  --from-file=credentials.json=/path/to/service-account-key.json
```
Option 2: Application Default Credentials (ADC)
If running on GKE or with Workload Identity, omit credentialsSecretRef to use ADC.
```yaml
apiVersion: openbao.org/v1alpha1
kind: OpenBaoCluster
metadata:
  name: backup-cluster
spec:
  backup:
    schedule: "0 3 * * *"
    executorImage: "openbao/backup-executor:v0.1.0"
    jwtAuthRole: backup
    target:
      provider: azure
      bucket: "openbao-backups" # Container name
      pathPrefix: "clusters/backup-cluster"
      azure:
        storageAccount: "mystorageaccount"
        container: "openbao-backups" # Optional, uses bucket if omitted
      credentialsSecretRef:
        name: azure-credentials
```
Azure Credentials Secret
Create a Secret with one of the following:
- `accountKey`: Storage account access key
- `connectionString`: Full Azure connection string
For managed identity (AKS), omit credentialsSecretRef and ensure the pod identity is configured.
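With the `accountKey` variant, the Secret can be created as follows (the placeholder stands in for your real storage account key):

```shell
# Create the Azure credentials Secret referenced by credentialsSecretRef above.
kubectl create secret generic azure-credentials \
  --from-literal=accountKey='<STORAGE_ACCOUNT_KEY>'
```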
This method uses a static OpenBao token stored in a Kubernetes Secret.
Same-Namespace Requirement
All secret references must exist in the same namespace as the OpenBaoCluster. Cross-namespace references are not allowed for security reasons.
Prerequisite: Create Token Secret
- Generate a token in OpenBao with permission to read `sys/storage/raft/snapshot`.
- Store it in a Secret:
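A minimal sketch of the Secret creation; the Secret name matches the `tokenSecretRef` in the examples below, but the data key (`token`) is an assumption, so check the CRD reference for the exact key the Operator expects:

```shell
# Store a static OpenBao token for the backup Job.
# NOTE: the key name "token" is an assumption, not confirmed by this page.
kubectl create secret generic backup-token \
  --from-literal=token='<OPENBAO_TOKEN>'
```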
Cluster Configuration:
```yaml
apiVersion: openbao.org/v1alpha1
kind: OpenBaoCluster
metadata:
  name: backup-cluster
spec:
  backup:
    schedule: "0 3 * * *"
    executorImage: "openbao/backup-executor:v0.1.0"
    tokenSecretRef:
      name: backup-token
    target:
      provider: s3
      endpoint: "https://s3.amazonaws.com"
      bucket: "openbao-backups"
      region: "us-east-1"
      credentialsSecretRef:
        name: s3-credentials
```
```yaml
apiVersion: openbao.org/v1alpha1
kind: OpenBaoCluster
metadata:
  name: backup-cluster
spec:
  backup:
    schedule: "0 3 * * *"
    executorImage: "openbao/backup-executor:v0.1.0"
    tokenSecretRef:
      name: backup-token
    target:
      provider: gcs
      bucket: "openbao-backups"
      gcs:
        project: "my-gcp-project"
      credentialsSecretRef:
        name: gcs-credentials
```
```yaml
apiVersion: openbao.org/v1alpha1
kind: OpenBaoCluster
metadata:
  name: backup-cluster
spec:
  backup:
    schedule: "0 3 * * *"
    executorImage: "openbao/backup-executor:v0.1.0"
    tokenSecretRef:
      name: backup-token
    target:
      provider: azure
      bucket: "openbao-backups"
      azure:
        storageAccount: "mystorageaccount"
      credentialsSecretRef:
        name: azure-credentials
```
Advanced Configuration¶
Provider-Specific Options¶
| Option | Default | Description |
|---|---|---|
| `region` | `us-east-1` | AWS region, or any value for S3-compatible stores |
| `usePathStyle` | `false` | Set `true` for MinIO and other S3-compatible stores |
| `roleArn` | - | IAM role ARN for Web Identity (IRSA) |
| Option | Description |
|---|---|
| `project` | GCP project ID (optional if using ADC or the credentials include a project) |
| `endpoint` | Custom endpoint (useful for emulators like fake-gcs-server) |
| Option | Description |
|---|---|
| `storageAccount` | Azure storage account name (required) |
| `container` | Container name (optional, uses `bucket` if omitted) |
| `endpoint` | Custom endpoint (useful for the Azurite emulator) |
Emulator Support
GCS and Azure support custom endpoints for local testing with emulators (fake-gcs-server, Azurite). For self-signed certificates, include the CA certificate in the credentials Secret.
Retention Policy¶
Automatically clean up old backups from object storage.
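A retention configuration might look like the sketch below; the field names under `retention` are assumptions and not confirmed by this page, so check the CRD reference for the exact schema your operator version supports:

```yaml
spec:
  backup:
    # NOTE: "retention" and "maxCount" are illustrative field names, not
    # confirmed by this page; verify against the OpenBaoCluster CRD reference.
    retention:
      maxCount: 14  # e.g. keep the 14 most recent snapshots
```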
Performance Tuning¶
Tune multipart upload settings for large datasets or specific network conditions.
| Parameter | Default | Description |
|---|---|---|
| `partSize` | 10 MB | Size of each upload chunk. Increase for high-bandwidth networks. |
| `concurrency` | 3 | Number of parallel part uploads. Increase for throughput, decrease under memory constraints. |
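The parameter names come from the table above, but their exact placement in the spec is an assumption; a tuning sketch for a high-bandwidth network might look like:

```yaml
spec:
  backup:
    target:
      # Placement under "target" is an assumption; consult the CRD reference.
      partSize: "32MB"  # larger chunks for fast links (default 10MB)
      concurrency: 6    # more parallel part uploads (default 3)
```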
Pre-Upgrade Snapshots¶
Ensure safety during upgrades by taking a snapshot immediately before the rolling update or blue/green deployment begins.
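A pre-upgrade snapshot toggle might be expressed as below; the field name is hypothetical and not confirmed by this page, so verify it against the CRD reference before use:

```yaml
spec:
  backup:
    # NOTE: "preUpgradeSnapshot" is a hypothetical field name used for
    # illustration only; the actual field in your CRD version may differ.
    preUpgradeSnapshot: true
```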
Operations¶
Check Status:
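Backup state is reported on the cluster resource; a way to inspect it (the `.status.backup` jsonpath is an assumption about the status layout) is:

```shell
# Show the backup-related status of the cluster.
# The ".status.backup" path is an assumption; inspect the full status with
# "kubectl get openbaocluster backup-cluster -o yaml" if it differs.
kubectl get openbaocluster backup-cluster -o jsonpath='{.status.backup}'
```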
Trigger Manual Backup:
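Operators commonly expose manual triggers via an annotation on the custom resource; the annotation key below is hypothetical, so check your operator's reference for the exact mechanism:

```shell
# Hypothetical manual-backup trigger via annotation; the annotation key
# "openbao.org/trigger-backup" is an illustrative guess, not confirmed here.
kubectl annotate openbaocluster backup-cluster \
  openbao.org/trigger-backup="$(date +%s)" --overwrite
```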