Version: next

This runbook should leave you with:

  • a fresh source snapshot written to shared RustFS storage
  • an OpenBaoRestore object that completes on the target cluster
  • a restored target that unseals with the shared Transit key
  • post-restore proof that source credentials and source data replaced the target bootstrap state
Destructive operation

This workflow overwrites the target cluster state. Existing auth methods, policies, and data on the target are replaced by the snapshot contents.

Decision matrix

Before you restore

| Requirement | Why it exists | What happens if it is wrong |
| --- | --- | --- |
| The target cluster already exists and exposes restore auth | The restore Job needs a live target to authenticate against and mutate. | The OpenBaoRestore object fails before it can apply the snapshot. |
| The snapshot key comes from a fresh successful backup | The runbook is supposed to prove actual source-state transfer, not a stale object lookup. | You may restore the wrong data set and draw the wrong conclusion about the lane. |
| Cutover is still manual | Verification must happen before any client-facing change. | You could move traffic to a target that restored incorrectly or still needs operator attention. |

Reference table

Validated lane defaults

| Value | Default | Purpose |
| --- | --- | --- |
| Source context | k3d-openbao-dr-source | Primary cluster that creates the backup. |
| Target context | k3d-openbao-dr-target | Recovery target cluster. |
| Source namespace | openbaocluster-dr-source | Namespace containing the source cluster. |
| Target namespace | openbaocluster-dr-target | Namespace containing the target cluster. |
| Restore name | openbaocluster-dr-target-restore | OpenBaoRestore name. |
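For copy-paste convenience, the defaults above can be captured once as shell variables. The values come straight from the table; the variable names themselves are our own convention, not something the operator requires:

```shell
# Validated lane defaults, taken from the reference table above.
SRC_CTX=k3d-openbao-dr-source          # source kubectl context
TGT_CTX=k3d-openbao-dr-target          # target kubectl context
SRC_NS=openbaocluster-dr-source        # source namespace
TGT_NS=openbaocluster-dr-target        # target namespace
RESTORE_NAME=openbaocluster-dr-target-restore  # OpenBaoRestore name
```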

Step 1: Trigger a fresh source backup

Apply

Trigger the source backup

```bash
kubectl --context k3d-openbao-dr-source -n openbaocluster-dr-source annotate \
  openbaocluster openbaocluster-dr-source \
  openbao.org/trigger-backup="$(date -u +%Y-%m-%dT%H:%M:%SZ)" --overwrite
```

Verify

Wait for the source cluster to finish backing up

```bash
kubectl --context k3d-openbao-dr-source -n openbaocluster-dr-source \
  get openbaocluster openbaocluster-dr-source \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{" reason="}{.reason}{"\n"}{end}'
```

Wait until BackingUp=False again before you capture the snapshot key.
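Rather than re-running the status command by hand, a small polling helper can wait for the `BackingUp=False` line to appear. This is a sketch, not part of the operator tooling; the helper name, poll interval, and roughly five-minute timeout are our own choices:

```shell
# Poll a command that prints Type=Status condition lines (like the
# verification command above) until BackingUp=False shows up.
wait_for_backup() {
  # $@: command that prints the condition lines
  for _ in $(seq 1 60); do
    if "$@" | grep -q '^BackingUp=False'; then
      echo "backup complete"
      return 0
    fi
    sleep 5
  done
  echo "timed out waiting for BackingUp=False" >&2
  return 1
}

# Real usage against the source cluster:
# wait_for_backup kubectl --context k3d-openbao-dr-source \
#   -n openbaocluster-dr-source get openbaocluster openbaocluster-dr-source \
#   -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
```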

Step 2: Capture the snapshot key

Inspect

Read the backup object key from source status

```bash
SNAPSHOT_KEY="$(
  kubectl --context k3d-openbao-dr-source -n openbaocluster-dr-source \
    get openbaocluster openbaocluster-dr-source \
    -o jsonpath='{.status.backup.lastBackupName}'
)"

printf '%s\n' "${SNAPSHOT_KEY}"
```
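An in-flight or failed backup leaves the status field empty, so it is worth guarding the captured key before moving on. A minimal sketch (the function name is hypothetical):

```shell
# Fail fast if the snapshot key is empty: an empty key would only
# surface later as a confusing restore error on the target.
require_snapshot_key() {
  if [ -z "$1" ]; then
    echo "error: snapshot key is empty; wait for the backup to finish" >&2
    return 1
  fi
  printf 'using snapshot key: %s\n' "$1"
}

# require_snapshot_key "${SNAPSHOT_KEY}"
```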

Step 3: Apply the restore on the target cluster

Apply

Apply the validated OpenBaoRestore manifest

```yaml
apiVersion: openbao.org/v1alpha1
kind: OpenBaoRestore
metadata:
  name: openbaocluster-dr-target-restore
  namespace: openbaocluster-dr-target
spec:
  cluster: openbaocluster-dr-target
  force: true
  image: ghcr.io/dc-tec/openbao-backup:edge
  source:
    target:
      provider: s3
      endpoint: "http://rustfs.openbaocluster-dr-target.svc:19000"
      bucket: "openbao-dr-backups"
      usePathStyle: true
      credentialsSecretRef:
        name: rustfs-secret
    key: "<snapshot-key>"
  jwtAuthRole: openbao-operator-restore
```

Replace <snapshot-key> with the exact value from the previous step before you apply the manifest.
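One low-risk way to do that substitution is to keep the manifest in a file and render the placeholder at apply time instead of hand-editing. A sketch, assuming the manifest above is saved as `restore.yaml` (our filename) and `SNAPSHOT_KEY` is still set from Step 2:

```shell
# Substitute the <snapshot-key> placeholder in a saved manifest and
# print the rendered YAML; pipe it to kubectl apply when ready.
render_restore_manifest() {
  # $1: manifest path containing <snapshot-key>; $2: snapshot key
  sed "s|<snapshot-key>|$2|" "$1"
}

# render_restore_manifest restore.yaml "${SNAPSHOT_KEY}" \
#   | kubectl --context k3d-openbao-dr-target apply -f -
```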

Verify the restore

Verify

Watch the restore object and inspect final status

```bash
kubectl --context k3d-openbao-dr-target -n openbaocluster-dr-target \
  get openbaorestore openbaocluster-dr-target-restore -w

kubectl --context k3d-openbao-dr-target -n openbaocluster-dr-target \
  get openbaorestore openbaocluster-dr-target-restore \
  -o jsonpath='{.status.phase}{"\n"}{range .status.conditions[*]}{.type}={.status}{" reason="}{.reason}{"\n"}{end}{.status.snapshotKey}{"\n"}'
```

The steady-state expectation is phase=Completed, RestoreConfigurationReady=True, and RestoreComplete=True with reason RestoreSucceeded.
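If you want to script that steady-state check, a helper can inspect the dump the jsonpath query prints (phase on the first line, then condition lines, then the snapshot key). The function name and exact matching are our own:

```shell
# Return 0 only when the status dump shows the documented steady state:
# phase Completed, RestoreConfigurationReady=True, and
# RestoreComplete=True with reason RestoreSucceeded.
restore_succeeded() {
  # $1: output of the jsonpath query above
  printf '%s\n' "$1" | grep -qx 'Completed' \
    && printf '%s\n' "$1" | grep -q 'RestoreConfigurationReady=True' \
    && printf '%s\n' "$1" | grep -q 'RestoreComplete=True reason=RestoreSucceeded'
}
```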

Verify

Check the target health endpoint after restore

```bash
curl -ksS --resolve bao-dr-target.example.com:11443:127.0.0.1 \
  https://bao-dr-target.example.com:11443/v1/sys/health
```

The restored target should return a normal OpenBao health response and the cluster lineage should now match the source snapshot.
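A quick scripted check on that health body: a restored, unsealed cluster reports `initialized` true and `sealed` false (standard OpenBao `/v1/sys/health` fields). The sketch below string-matches the compact JSON that curl returns rather than depending on jq:

```shell
# Check the two fields that matter after a restore: the cluster must be
# initialized and must not be sealed. Assumes compact JSON (no spaces
# around colons), which is what the health endpoint emits.
health_ok() {
  # $1: JSON body from /v1/sys/health
  printf '%s' "$1" | grep -q '"initialized":true' \
    && printf '%s' "$1" | grep -q '"sealed":false'
}
```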

Verify

Verify credential cutover and restored data

bash

curl -ksS -o /tmp/target-login.json -w '%{http_code}\n' \
--resolve bao-dr-target.example.com:11443:127.0.0.1 \
-H 'Content-Type: application/json' \
-d '{"password":"target-demo-password"}' \
https://bao-dr-target.example.com:11443/v1/auth/userpass/login/demo-admin

SOURCE_TOKEN="$(
curl -ksS --resolve bao-dr-target.example.com:11443:127.0.0.1 \
-H 'Content-Type: application/json' \
-d '{"password":"source-demo-password"}' \
https://bao-dr-target.example.com:11443/v1/auth/userpass/login/demo-admin \
| jq -r '.auth.client_token'
)"

curl -ksS --resolve bao-dr-target.example.com:11443:127.0.0.1 \
-H "X-Vault-Token: ${SOURCE_TOKEN}" \
https://bao-dr-target.example.com:11443/v1/secret/data/dr-control

The old target password should fail. The source password should succeed, and the dr-control marker should now show phase1-source.
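Those three outcomes can be asserted in one place. In this sketch the HTTP codes come from `-w '%{http_code}'` on the two login calls above and the marker string from the dr-control read; treating 400 or 403 as a valid rejection is our own assumption about how the userpass login fails:

```shell
# Verify the cutover story: old target password rejected, source
# password accepted, restored marker reads phase1-source.
cutover_verified() {
  # $1: HTTP code from the target-password login (must be rejected)
  # $2: HTTP code from the source-password login (must succeed)
  # $3: marker value read back from secret/data/dr-control
  { [ "$1" = "400" ] || [ "$1" = "403" ]; } || return 1
  [ "$2" = "200" ] || return 1
  [ "$3" = "phase1-source" ]
}
```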

After the restore
