Run the destructive restore step only after the source snapshot, target health, and shared seal root are all known-good.
This runbook restores the target cluster in the validated local DR lane from a source snapshot stored in RustFS. It assumes the source and target already share the same external Transit key and that you are ready to overwrite the target bootstrap state.
This runbook should leave you with:

- a fresh source snapshot written to shared RustFS storage
- an OpenBaoRestore object that completes on the target cluster
- a restored target that unseals with the shared Transit key
- post-restore proof that source credentials and source data replaced the target bootstrap state
This workflow overwrites the target cluster state. Existing auth methods, policies, and data on the target are replaced by the snapshot contents.
Decision matrix
Before you restore
| Requirement | Why it exists | What happens if it is wrong |
|---|---|---|
| Source and target share the same Transit root of trust | Restored data must decrypt under the same external seal key after it lands on the target side. | The restore can complete and the target can still remain sealed. |
| The target cluster already exists and exposes restore auth | The restore Job needs a live target to authenticate against and mutate. | The OpenBaoRestore object will fail before it can apply the snapshot. |
| The snapshot key comes from a fresh successful backup | The runbook is supposed to prove actual source-state transfer, not a stale object lookup. | You may restore the wrong data set and draw the wrong conclusion about the lane. |
| Cutover is still manual | Verification must happen before any client-facing change. | You can move traffic to a target that restored incorrectly or still needs operator attention. |
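The first row of the matrix can be turned into a mechanical preflight check. The helper below is a sketch, not operator tooling: it only compares two seal-key identifiers as strings, and the commented `kubectl` wiring assumes a `jsonpath` location for the Transit key name that your manifests may record differently.

```shell
# Hypothetical preflight helper: the restore only works when both clusters
# point at the same external Transit key, so compare a seal-root identifier
# taken from each side before doing anything destructive.
same_seal_root() {
  # $1 = source seal key identifier, $2 = target seal key identifier
  [ -n "$1" ] && [ "$1" = "$2" ]
}

# Example wiring (the jsonpath below is an assumption; adjust it to wherever
# your OpenBaoCluster manifests record the Transit key name):
# SRC_KEY="$(kubectl --context k3d-openbao-dr-source -n openbaocluster-dr-source \
#   get openbaocluster openbaocluster-dr-source -o jsonpath='{.spec.unseal.transit.keyName}')"
# TGT_KEY="$(kubectl --context k3d-openbao-dr-target -n openbaocluster-dr-target \
#   get openbaocluster openbaocluster-dr-target -o jsonpath='{.spec.unseal.transit.keyName}')"
# same_seal_root "$SRC_KEY" "$TGT_KEY" || { echo "seal roots differ; stop" >&2; exit 1; }
```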
Reference table
Validated lane defaults
| Value | Default | Purpose |
|---|---|---|
| Source context | k3d-openbao-dr-source | Primary cluster that creates the backup. |
| Target context | k3d-openbao-dr-target | Recovery target cluster. |
| Source namespace | openbaocluster-dr-source | Namespace containing the source cluster. |
| Target namespace | openbaocluster-dr-target | Namespace containing the target cluster. |
| Restore name | openbaocluster-dr-target-restore | OpenBaoRestore name. |
Step 1: Trigger a fresh source backup
Apply
Trigger the source backup
kubectl --context k3d-openbao-dr-source -n openbaocluster-dr-source annotate \
openbaocluster openbaocluster-dr-source \
openbao.org/trigger-backup="$(date -u +%Y-%m-%dT%H:%M:%SZ)" --overwrite
Verify
Wait for the source cluster to finish backing up
kubectl --context k3d-openbao-dr-source -n openbaocluster-dr-source \
get openbaocluster openbaocluster-dr-source \
-o jsonpath='{range .status.conditions[*]}{.type}={.status}{" reason="}{.reason}{"\n"}{end}'
Wait until BackingUp=False again before you capture the snapshot key.
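If you would rather script the wait than re-run the query by hand, the condition output above is easy to test mechanically. The `backup_done` helper is a sketch that greps the same jsonpath output for the steady state:

```shell
# Sketch of a wait helper: succeeds once the condition lines from Step 1
# report BackingUp=False, i.e. the source snapshot has been written.
backup_done() {
  # $1 = the "Type=Status reason=..." lines printed by the jsonpath query
  printf '%s\n' "$1" | grep -q '^BackingUp=False'
}

# Example polling loop (same kubectl query as above):
# until backup_done "$(kubectl --context k3d-openbao-dr-source \
#     -n openbaocluster-dr-source get openbaocluster openbaocluster-dr-source \
#     -o jsonpath='{range .status.conditions[*]}{.type}={.status}{" reason="}{.reason}{"\n"}{end}')"; do
#   sleep 5
# done
```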
Step 2: Capture the snapshot key
Inspect
Read the backup object key from source status
SNAPSHOT_KEY="$(
kubectl --context k3d-openbao-dr-source -n openbaocluster-dr-source \
get openbaocluster openbaocluster-dr-source \
-o jsonpath='{.status.backup.lastBackupName}'
)"
printf '%s\n' "${SNAPSHOT_KEY}"
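An empty value here usually means the backup has not finished yet. A small guard (a sketch, not part of the operator) keeps an empty key from ever reaching the restore manifest:

```shell
# Sketch of a guard: refuse to continue when the captured snapshot key is
# empty, which would otherwise produce a restore pointing at no object.
require_snapshot_key() {
  if [ -z "$1" ]; then
    echo "SNAPSHOT_KEY is empty; re-check the backup status before restoring" >&2
    return 1
  fi
  return 0
}

# Usage: require_snapshot_key "${SNAPSHOT_KEY}" || exit 1
```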
Step 3: Apply the restore on the target cluster
Apply
Apply the validated OpenBaoRestore manifest
apiVersion: openbao.org/v1alpha1
kind: OpenBaoRestore
metadata:
  name: openbaocluster-dr-target-restore
  namespace: openbaocluster-dr-target
spec:
  cluster: openbaocluster-dr-target
  force: true
  image: ghcr.io/dc-tec/openbao-backup:edge
  source:
    target:
      provider: s3
      endpoint: "http://rustfs.openbaocluster-dr-target.svc:19000"
      bucket: "openbao-dr-backups"
      usePathStyle: true
      credentialsSecretRef:
        name: rustfs-secret
    key: "<snapshot-key>"
  jwtAuthRole: openbao-operator-restore
Replace <snapshot-key> with the exact value from the previous step before you apply the manifest.
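One way to avoid hand-editing the manifest is to substitute the captured key into it before applying. This sketch assumes you saved the manifest above as `restore.yaml` with the literal `<snapshot-key>` placeholder intact:

```shell
# Substitute the captured snapshot key into the saved manifest and emit the
# rendered YAML on stdout. Assumes the file contains the literal
# <snapshot-key> placeholder from the manifest above.
render_restore() {
  # $1 = manifest path, $2 = snapshot key
  sed "s|<snapshot-key>|$2|" "$1"
}

# render_restore restore.yaml "${SNAPSHOT_KEY}" \
#   | kubectl --context k3d-openbao-dr-target -n openbaocluster-dr-target apply -f -
```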
Verify the restore
Verify
Watch the restore object and inspect final status
kubectl --context k3d-openbao-dr-target -n openbaocluster-dr-target \
get openbaorestore openbaocluster-dr-target-restore -w
kubectl --context k3d-openbao-dr-target -n openbaocluster-dr-target \
get openbaorestore openbaocluster-dr-target-restore \
-o jsonpath='{.status.phase}{"\n"}{range .status.conditions[*]}{.type}={.status}{" reason="}{.reason}{"\n"}{end}{.status.snapshotKey}{"\n"}'
The steady-state expectation is phase=Completed, RestoreConfigurationReady=True, and RestoreComplete=True with reason RestoreSucceeded.
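The same jsonpath output can feed a scripted gate before you move on to the credential checks. The helper is a sketch that only inspects the condition lines as text:

```shell
# Sketch: succeed only when the restore conditions show the steady state
# described above: RestoreComplete=True with reason RestoreSucceeded.
restore_succeeded() {
  printf '%s\n' "$1" | grep -q '^RestoreComplete=True reason=RestoreSucceeded'
}

# STATUS="$(kubectl --context k3d-openbao-dr-target -n openbaocluster-dr-target \
#   get openbaorestore openbaocluster-dr-target-restore \
#   -o jsonpath='{range .status.conditions[*]}{.type}={.status}{" reason="}{.reason}{"\n"}{end}')"
# restore_succeeded "$STATUS" || echo "restore not complete yet" >&2
```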
Verify
Check the target health endpoint after restore
curl -ksS --resolve bao-dr-target.example.com:11443:127.0.0.1 \
https://bao-dr-target.example.com:11443/v1/sys/health
The restored target should return a normal OpenBao health response and the cluster lineage should now match the source snapshot.
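Concretely, a "normal health response" means the node reports itself unsealed. A quick string check on the health JSON (a sketch that avoids a `jq` dependency) catches the seal-mismatch failure mode from the decision matrix:

```shell
# Sketch: verify the /v1/sys/health JSON reports the node as unsealed.
# A sealed target after a completed restore points at a Transit
# root-of-trust mismatch between source and target.
is_unsealed() {
  printf '%s' "$1" | grep -Eq '"sealed"[[:space:]]*:[[:space:]]*false'
}

# HEALTH="$(curl -ksS --resolve bao-dr-target.example.com:11443:127.0.0.1 \
#   https://bao-dr-target.example.com:11443/v1/sys/health)"
# is_unsealed "$HEALTH" || echo "target is still sealed; check the seal config" >&2
```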
Verify
Verify credential cutover and restored data
curl -ksS -o /tmp/target-login.json -w '%{http_code}\n' \
--resolve bao-dr-target.example.com:11443:127.0.0.1 \
-H 'Content-Type: application/json' \
-d '{"password":"target-demo-password"}' \
https://bao-dr-target.example.com:11443/v1/auth/userpass/login/demo-admin
SOURCE_TOKEN="$(
curl -ksS --resolve bao-dr-target.example.com:11443:127.0.0.1 \
-H 'Content-Type: application/json' \
-d '{"password":"source-demo-password"}' \
https://bao-dr-target.example.com:11443/v1/auth/userpass/login/demo-admin \
| jq -r '.auth.client_token'
)"
curl -ksS --resolve bao-dr-target.example.com:11443:127.0.0.1 \
-H "X-Vault-Token: ${SOURCE_TOKEN}" \
https://bao-dr-target.example.com:11443/v1/secret/data/dr-control
The old target password should fail. The source password should succeed, and the dr-control marker should now show phase1-source.
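The credential cutover can also be asserted on the HTTP status codes alone. This sketch assumes a rejected userpass login returns a 4xx code and a successful one returns 200:

```shell
# Sketch: the old target password should be rejected (4xx) and the restored
# source password accepted (200). Pass in the codes captured via curl -w.
cutover_ok() {
  # $1 = HTTP code from the old target-password login
  # $2 = HTTP code from the source-password login
  case "$1" in 4*) : ;; *) return 1 ;; esac
  [ "$2" = "200" ]
}

# OLD_CODE and NEW_CODE come from the two curl commands above, run with
# -o /dev/null -w '%{http_code}' instead of printing the response body.
# cutover_ok "$OLD_CODE" "$NEW_CODE" && echo "credential cutover verified"
```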
After the restore
Cutover remains a manual step: complete all of the verification above before you point any client traffic at the restored target.