Reproduce the validated local development lane without turning a quick-start cluster into a pile of one-off overrides.
This recipe stands up the local development baseline with tenant onboarding, operator-managed TLS, a shared terminating edge, JWT bootstrap, an optional demo login, and an S3-compatible backup path backed by RustFS.
This recipe should leave you with:
- an onboarded tenant namespace and admin ServiceAccount
- a Development-profile cluster that self-initializes and exposes the UI through the shared edge
- JWT admin login working for a real ServiceAccount token
- a RustFS-backed backup configuration you can verify before the first upgrade rehearsal
This recipe follows the development lifecycle, backup, and blue/green patterns exercised in the local validation environment and the in-repo E2E suites. The optional userpass login remains local-only convenience layered on top of the validated cluster path.
Use the generic tenant onboarding, external access, and backup operations guides when you need standard operator behavior. This recipe captures only the exact validated local lane.
Decision matrix
What this lane assumes
| Assumption | Why it exists | What breaks if it is wrong |
|---|---|---|
| Multi-tenant operator install with admission enabled | The validated path starts from the default tenant-onboarding model. | Namespace provisioning and generated RBAC will drift from the lane you are trying to reproduce. |
| A shared terminating Gateway already exists | The lane intentionally reuses one local edge for OpenBao and the rest of the toolchain. | You will not validate the same routing contract if you fall back to port-forwarding or user-managed passthrough. |
| RustFS is reachable as an S3-compatible endpoint | Backups are part of the lane, not an optional afterthought. | The cluster may look healthy while the part of the lane that matters for restore rehearsal never actually works. |
| Demo login stays local-only | The userpass bootstrap is included only to make UI validation and demos faster. | Reusing it outside a disposable environment turns a convenience into a security mistake. |
Reference table
Inputs to replace before apply
| Placeholder | Example | Purpose |
|---|---|---|
| `<namespace>` | openbaocluster-demo | Tenant namespace for the cluster. |
| `<cluster-name>` | openbaocluster-demo | OpenBaoCluster name. |
| `<openbao-version>` | 2.5.1 | OpenBao version. |
| `<gateway-name>` | shared-gateway | Existing terminating Gateway used by the local toolchain. |
| `<gateway-namespace>` | default | Namespace of the Gateway. |
| `<external-host>` | bao-demo.example.com | External hostname for the shared-edge route. |
| `<operator-namespace>` | openbao-operator-system | Namespace that hosts the central OpenBaoTenant resource. |
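The placeholders can be filled mechanically before applying any manifest in this recipe. A minimal sketch using sed; the `fill_placeholders` helper is illustrative and not part of the recipe, and the values shown are the example values from the table:

```shell
# Hypothetical helper: substitute this recipe's placeholders with the
# example values from the table above before piping manifests to kubectl.
NAMESPACE=openbaocluster-demo
CLUSTER_NAME=openbaocluster-demo
fill_placeholders() {
  sed -e "s/<namespace>/${NAMESPACE}/g" \
      -e "s/<cluster-name>/${CLUSTER_NAME}/g" "$@"
}
# Example substitution on a one-line fragment:
printf 'name: <cluster-name>-tenant\n' | fill_placeholders
```

Extend the helper with the remaining placeholders in the same way before applying the full manifests.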
Step 1: Onboard the tenant namespace
Apply
Create the namespace, onboarding request, and admin ServiceAccount
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: <namespace>
  labels:
    openbao.org/tenant: "true"
---
apiVersion: openbao.org/v1alpha1
kind: OpenBaoTenant
metadata:
  name: <cluster-name>-tenant
  namespace: <operator-namespace>
spec:
  targetNamespace: <namespace>
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: openbao-admin
  namespace: <namespace>
```
Verify
Wait for tenant provisioning
```shell
kubectl -n <operator-namespace> describe openbaotenant <cluster-name>-tenant
```
The steady-state expectation is Provisioned=True. Do not move to the cluster manifest until the namespace is actually prepared for the operator.
Step 2: Create the backup credentials Secret
Apply
Create the RustFS credentials Secret
```shell
kubectl -n <namespace> create secret generic rustfs-secret \
  --from-literal=accessKeyId='rustfsadmin' \
  --from-literal=secretAccessKey='rustfsadmin'
```
If your local RustFS instance uses different credentials, replace both values here and in the object-storage service itself.
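If you prefer a declarative manifest, an equivalent Secret would look like the sketch below (same demo credentials as the imperative command; replace them before using this anywhere shared):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: rustfs-secret
  namespace: <namespace>
type: Opaque
stringData:
  accessKeyId: rustfsadmin
  secretAccessKey: rustfsadmin
```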
Step 3: Apply the validated development cluster manifest
Apply
Apply the Development-profile cluster
```yaml
apiVersion: openbao.org/v1alpha1
kind: OpenBaoCluster
metadata:
  name: <cluster-name>
  namespace: <namespace>
spec:
  profile: Development
  replicas: 3
  version: "<openbao-version>"
  tls:
    enabled: true
    mode: OperatorManaged
    rotationPeriod: "720h"
  configuration:
    logLevel: "info"
    ui: true
    logging:
      format: "json"
    defaultLeaseTTL: "720h"
    maxLeaseTTL: "8760h"
    cacheSize: 134217728
    disableCache: false
    raft:
      performanceMultiplier: 2
  storage:
    size: "10Gi"
    deletionPolicy: DeleteAll
  selfInit:
    enabled: true
    oidc:
      enabled: true
    requests:
      - name: enable-userpass-auth
        operation: update
        path: sys/auth/userpass
        authMethod:
          type: userpass
      - name: enable-jwt-auth
        operation: update
        path: sys/auth/jwt
        authMethod:
          type: jwt
      - name: enable-demo-kv
        operation: update
        path: sys/mounts/secret
        secretEngine:
          type: kv
          description: "Demo KV v2 engine"
          options:
            version: "2"
      - name: create-admin-policy
        operation: update
        path: sys/policies/acl/admin
        policy:
          policy: |
            path "*" {
              capabilities = ["create", "read", "update", "delete", "list", "sudo"]
            }
      - name: create-demo-ui-user
        operation: update
        path: auth/userpass/users/demo-admin
        data:
          password: "demo-password"
          token_policies:
            - admin
      - name: create-admin-jwt-role
        operation: update
        path: auth/jwt/role/admin
        data:
          role_type: jwt
          user_claim: sub
          bound_audiences:
            - openbao-internal
          bound_subject: system:serviceaccount:<namespace>:openbao-admin
          token_policies:
            - admin
          policies:
            - admin
          ttl: 1h
  gateway:
    enabled: true
    listenerName: websecure
    gatewayRef:
      name: <gateway-name>
      namespace: <gateway-namespace>
    hostname: "<external-host>"
    backendTLS:
      enabled: true
    tlsPassthrough: false
    path: /
  backup:
    schedule: "*/30 * * * *"
    target:
      provider: s3
      endpoint: "http://rustfs-svc.rustfs.svc.cluster.local:9000"
      bucket: "openbao-backups"
      pathPrefix: "clusters/<cluster-name>"
      usePathStyle: true
      credentialsSecretRef:
        name: rustfs-secret
    retention:
      maxCount: 7
      maxAge: "168h"
  upgrade:
    preUpgradeSnapshot: true
    strategy: BlueGreen
```
The demo-admin user exists only to make local validation easy. Keep it out of any shared or long-lived environment.
If kubelet rejects the Pods because AppArmor is unavailable, add:
```yaml
spec:
  workloadHardening:
    appArmorEnabled: false
```
Verify the lane
Verify
Check the cluster conditions
```shell
kubectl -n <namespace> get openbaocluster <cluster-name> \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{" reason="}{.reason}{"\n"}{end}'
```
The steady-state expectation is Available=True, TLSReady=True, UserAccessBootstrap=True, BackupConfigurationReady=True, GatewayIntegrationReady=True, OpenBaoInitialized=True, and OpenBaoSealed=False.
Verify
Confirm the cluster did not persist a root token Secret
```shell
kubectl -n <namespace> get secret <cluster-name>-root-token
```
This should return NotFound. A self-init lane should not leave the root token stored as a Kubernetes Secret.
Verify
Verify the demo UI login through the local service
```shell
# Run the port-forward in a separate terminal, or background it with &.
kubectl -n <namespace> port-forward svc/<cluster-name> 8200:8200

export VAULT_ADDR="https://127.0.0.1:8200"
curl -sS -k \
  -H 'Content-Type: application/json' \
  -d '{"password":"demo-password"}' \
  ${VAULT_ADDR%/}/v1/auth/userpass/login/demo-admin
```
Local browsers and CLIs may warn about the operator-managed CA. That is expected in this lane.
Verify
Verify JWT admin login
```shell
JWT="$(kubectl -n <namespace> create token openbao-admin --audience openbao-internal --duration=1h)"
curl -sS -k \
  -H 'Content-Type: application/json' \
  -d "{\"role\":\"admin\",\"jwt\":\"${JWT}\"}" \
  ${VAULT_ADDR%/}/v1/auth/jwt/login
```
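If the login is rejected, it helps to confirm locally that the token's claims match what the JWT role binds on. A sketch; the `decode_claims` helper is illustrative and assumes `JWT` holds the token created above:

```shell
# Illustrative helper: print the claims segment of a JWT (the second
# dot-separated field, base64url-encoded). The sub and aud claims are
# what auth/jwt/role/admin matches via bound_subject / bound_audiences.
decode_claims() {
  p=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  # restore the base64 padding stripped by the JWT encoding
  while [ $(( ${#p} % 4 )) -ne 0 ]; do p="${p}="; done
  printf '%s' "$p" | base64 -d
}
decode_claims "$JWT"
```

The printed `sub` should be `system:serviceaccount:<namespace>:openbao-admin` and `aud` should contain `openbao-internal`.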
Verify
Trigger and inspect a manual backup
```shell
kubectl -n <namespace> annotate openbaocluster <cluster-name> \
  openbao.org/trigger-backup="$(date -u +%Y-%m-%dT%H:%M:%SZ)" --overwrite

kubectl -n <namespace> get openbaocluster <cluster-name> \
  -o jsonpath='{.status.backup.lastBackupName}{"\n"}{.status.backup.lastBackupTime}{"\n"}{.status.backup.lastFailureReason}{"\n"}{.status.backup.lastFailureMessage}{"\n"}'
```
A successful lane should produce a backup object key and no failure reason or failure message.
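As a rough illustration of the retention settings in the manifest, the `maxAge: "168h"` window can be computed locally. This is only a sketch of the window; the actual pruning semantics belong to the operator, not this snippet:

```shell
# Sketch: compute the cutoff implied by maxAge="168h" (7 days).
# Backups uploaded before this timestamp, or beyond the newest
# maxCount=7, are assumed to become eligible for pruning.
MAX_AGE_HOURS=168
cutoff=$(date -u -d "-${MAX_AGE_HOURS} hours" +%Y-%m-%dT%H:%M:%SZ)
echo "retention cutoff: ${cutoff}"
```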