Bootstrap the DR proving ground before you test the restore event itself.
This recipe prepares the validated local disaster-recovery lane: a shared trust-services cluster, a source cluster, and a target cluster, all wired so the restore event will cross the same kinds of trust, storage, and ingress boundaries you expect in a real DR rehearsal.
This recipe should leave you with:
- an infra cluster that hosts shared trust services and the shared Transit key
- a healthy source cluster and target cluster with distinct namespaces and external endpoints
- shared RustFS storage available to both clusters for snapshot transfer
- known pre-restore state on both sides so the restore event can be verified later
This bootstrap path matches the local DR lane that was proven end to end on March 16, 2026, including source backup, restore into the target cluster, target unseal, and credential plus data verification after restore.
Decision matrix
What this lane assumes
| Assumption | Why it exists | What breaks if it is wrong |
|---|---|---|
| You can create three k3d-backed contexts | The lane depends on an infra cluster plus separate source and target clusters. | If you collapse everything into one cluster, you stop proving the cluster-boundary part of DR. |
| You have bootstrap automation or manifests for the three-cluster lab | The validated lane was assembled from repeatable infra, gateway, and operator setup rather than ad hoc kubectl edits. | Without repeatable bootstrap artifacts, the restore result is hard to trust or reproduce. |
| Shared Transit and shared storage are available before cluster apply | Source and target clusters both depend on them from the start. | The restore will fail later if these dependencies are improvised after the source cluster is already in use. |
| Cutover remains manual | The bootstrap only prepares the recovery pair; it does not automate failover. | Treating the lane as automatic DR creates false confidence in behavior it does not validate. |
Reference table
Validated lane defaults
| Value | Default | Purpose |
|---|---|---|
| Infra context | k3d-openbao-dr-infra | Shared trust-services cluster. |
| Source context | k3d-openbao-dr-source | Primary cluster that creates the snapshot. |
| Target context | k3d-openbao-dr-target | Recovery target cluster. |
| Source hostname | bao-dr-source.example.com | Source passthrough endpoint. |
| Target hostname | bao-dr-target.example.com | Target passthrough endpoint. |
| Transit endpoint | https://host.k3d.internal | Shared trust-services endpoint. |
| Snapshot bucket | openbao-dr-backups | Shared RustFS bucket. |
| Transit key | openbao-dr-shared-unseal | Shared seal root used by both clusters. |
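The later commands repeat these defaults inline; if you prefer, you can capture them once as shell variables. The variable names below are illustrative conveniences, not part of the operator contract:

```shell
# Lane defaults from the reference table, captured once for reuse.
# Variable names are illustrative, not part of any operator contract.
INFRA_CTX="k3d-openbao-dr-infra"
SOURCE_CTX="k3d-openbao-dr-source"
TARGET_CTX="k3d-openbao-dr-target"
SOURCE_HOST="bao-dr-source.example.com"
TARGET_HOST="bao-dr-target.example.com"
TRANSIT_ENDPOINT="https://host.k3d.internal"
SNAPSHOT_BUCKET="openbao-dr-backups"
TRANSIT_KEY="openbao-dr-shared-unseal"
```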
Step 1: Bootstrap the three-cluster lab
Configure
Run the bootstrap automation or manifests you keep for the validated lane
The validated bootstrap needs to create and wire:
- one infra cluster for shared trust services
- one source cluster
- one target cluster
- the Gateway API experimental bundle in each cluster
- a dedicated passthrough edge in each cluster
- a shared RustFS instance and bucket
- a shared external OpenBao trust-services endpoint in the infra cluster
- one operator install in the source cluster
- one operator install in the target cluster
The exact command is specific to your k3d automation. The lane contract is the resulting topology, not the name of one local helper script.
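One possible shape for the cluster-creation part of that automation, sketched with a dry-run guard so you can inspect the plan before it touches k3d. The server counts and load-balancer port mappings here are assumptions for illustration, not lane requirements:

```shell
# Hypothetical bootstrap sketch: prints the plan (dry run) or executes it.
# Only the cluster-creation step is shown; the Gateway bundle, RustFS,
# trust services, and operator installs would follow with your own manifests.
bootstrap_lane() {
  dry_run="${1:-true}"
  run() {
    if [ "$dry_run" = "true" ]; then echo "+ $*"; else "$@"; fi
  }
  run k3d cluster create openbao-dr-infra --servers 1
  run k3d cluster create openbao-dr-source --servers 1 -p "10443:443@loadbalancer"
  run k3d cluster create openbao-dr-target --servers 1 -p "11443:443@loadbalancer"
}

# Dry run first; pass "false" once the plan matches your lane.
bootstrap_lane true
```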
The validated local proof used the public signed edge images by default:
```
ghcr.io/dc-tec/openbao-operator:edge
ghcr.io/dc-tec/openbao-backup:edge
```
Step 2: Apply the source and target clusters
Apply
Apply the source and target OpenBaoCluster manifests
```shell
kubectl --context <source-context> apply -f source-openbaocluster.yaml
kubectl --context <target-context> apply -f target-openbaocluster.yaml
```
The source and target manifests must both reference the same Transit endpoint, CA bundle, SNI, and key name. That shared seal root is the invariant the restore event depends on.
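That invariant can be checked mechanically before applying. The sketch below diffs seal-related lines between the two manifests; the grep pattern assumes the Transit settings appear as YAML fields containing these keywords, so adjust it to your manifest layout:

```shell
# Sketch: fail unless the seal-related lines of the two manifests match.
# The field-name pattern is an assumption about your manifest layout.
check_seal_match() {
  pat='transit|keyName|caBundle|address|tlsServerName'
  a="$(grep -E "$pat" "$1" | sort)"
  b="$(grep -E "$pat" "$2" | sort)"
  [ "$a" = "$b" ]
}

# Usage:
#   check_seal_match source-openbaocluster.yaml target-openbaocluster.yaml \
#     && echo "seal settings match"
```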
Verify the bootstrap
Verify
Check source and target readiness
```shell
kubectl --context k3d-openbao-dr-source -n openbaocluster-dr-source \
get openbaocluster openbaocluster-dr-source \
-o jsonpath='{.status.phase}{"\n"}{.status.readyReplicas}{"\n"}{range .status.conditions[*]}{.type}={.status}{" reason="}{.reason}{"\n"}{end}'
kubectl --context k3d-openbao-dr-target -n openbaocluster-dr-target \
get openbaocluster openbaocluster-dr-target \
-o jsonpath='{.status.phase}{"\n"}{.status.readyReplicas}{"\n"}{range .status.conditions[*]}{.type}={.status}{" reason="}{.reason}{"\n"}{end}'
```
The important steady-state expectation on both sides is phase=Running, readyReplicas=1, Available=True, OpenBaoInitialized=True, and OpenBaoSealed=False.
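Those expectations can be turned into a pass/fail check. This helper is a sketch that assumes the exact jsonpath output format shown above (phase on line 1, readyReplicas on line 2, condition lines after):

```shell
# Sketch: read the jsonpath output above on stdin and fail unless the
# steady-state expectations hold. Assumes the exact output shape shown.
status_ok() {
  out="$(cat)"
  echo "$out" | sed -n 1p | grep -qx "Running" &&
  echo "$out" | sed -n 2p | grep -qx "1" &&
  echo "$out" | grep -q "Available=True" &&
  echo "$out" | grep -q "OpenBaoInitialized=True" &&
  echo "$out" | grep -q "OpenBaoSealed=False"
}

# Usage:
#   kubectl --context k3d-openbao-dr-source ... | status_ok && echo "source ready"
```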
Verify
Check the source and target health endpoints
```shell
curl -ksS --resolve bao-dr-source.example.com:10443:127.0.0.1 \
https://bao-dr-source.example.com:10443/v1/sys/health
curl -ksS --resolve bao-dr-target.example.com:11443:127.0.0.1 \
https://bao-dr-target.example.com:11443/v1/sys/health
```
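Rather than eyeballing the JSON, you can assert on it. A minimal sketch using jq, relying on the standard `initialized` and `sealed` fields of the `/v1/sys/health` response:

```shell
# Sketch: read a /v1/sys/health JSON body on stdin and fail unless the
# instance is initialized and unsealed. Uses the standard health fields.
health_ok() {
  jq -e '.initialized == true and .sealed == false' >/dev/null
}

# Usage:
#   curl -ksS --resolve bao-dr-source.example.com:10443:127.0.0.1 \
#     https://bao-dr-source.example.com:10443/v1/sys/health | health_ok
```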
Verify
Verify the pre-restore source and target state
```shell
SOURCE_TOKEN="$(
curl -ksS --resolve bao-dr-source.example.com:10443:127.0.0.1 \
-H 'Content-Type: application/json' \
-d '{"password":"source-demo-password"}' \
https://bao-dr-source.example.com:10443/v1/auth/userpass/login/demo-admin \
| jq -r '.auth.client_token'
)"
TARGET_TOKEN="$(
curl -ksS --resolve bao-dr-target.example.com:11443:127.0.0.1 \
-H 'Content-Type: application/json' \
-d '{"password":"target-demo-password"}' \
https://bao-dr-target.example.com:11443/v1/auth/userpass/login/demo-admin \
| jq -r '.auth.client_token'
)"
curl -ksS --resolve bao-dr-source.example.com:10443:127.0.0.1 \
-H "X-Vault-Token: ${SOURCE_TOKEN}" \
https://bao-dr-source.example.com:10443/v1/secret/data/dr-control
curl -ksS --resolve bao-dr-target.example.com:11443:127.0.0.1 \
-H "X-Vault-Token: ${TARGET_TOKEN}" \
https://bao-dr-target.example.com:11443/v1/secret/data/dr-control
```
The validated lane starts with phase1-source on the source side and phase1-target on the target side so the restore event can prove that target state was really replaced.
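A small helper can compare the marker against the expected value. This sketch assumes the marker is stored under a hypothetical `phase` key in the KV v2 payload; adjust the jq path to match however you seeded the dr-control secret:

```shell
# Sketch: compare the control secret's marker against an expected value.
# The "phase" key is a hypothetical assumption about how the dr-control
# secret was seeded; the .data.data wrapper is the standard KV v2 shape.
expect_marker() {
  expected="$1"
  actual="$(jq -r '.data.data.phase')"
  [ "$actual" = "$expected" ] || {
    echo "expected ${expected}, got ${actual}" >&2
    return 1
  }
}

# Usage:
#   curl ... https://bao-dr-source.example.com:10443/v1/secret/data/dr-control \
#     | expect_marker phase1-source
```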
Continue the DR rehearsal