
This recipe should leave you with

  • an onboarded tenant namespace and admin ServiceAccount
  • a Development-profile cluster that self-initializes and exposes the UI through the shared edge
  • JWT admin login working for a real ServiceAccount token
  • a RustFS-backed backup configuration you can verify before the first upgrade rehearsal
Validated lane

This recipe follows the development lifecycle, backup, and blue/green patterns exercised in the local validation environment and the in-repo E2E suites. The optional userpass login remains local-only convenience layered on top of the validated cluster path.

Use the main docs for the product-wide source of truth

Use tenant onboarding, external access, and backup operations when you need the generic operator behavior. This recipe only captures the exact validated local lane.

Decision matrix

What this lane assumes

| Assumption | Why it exists | What breaks if it is wrong |
| --- | --- | --- |
| A shared terminating Gateway already exists | The lane intentionally reuses one local edge for OpenBao and the rest of the toolchain. | You will not validate the same routing contract if you fall back to port-forwarding or user-managed passthrough. |
| RustFS is reachable as an S3-compatible endpoint | Backups are part of the lane, not an optional afterthought. | The cluster may look healthy while the restore-rehearsal part of the lane never actually works. |
| Demo login stays local-only | The userpass bootstrap is included only to make UI validation and demos faster. | Reusing it outside a disposable environment turns a convenience into a security mistake. |

Reference table

Inputs to replace before apply

| Placeholder | Example | Purpose |
| --- | --- | --- |
| `<namespace>` | openbaocluster-demo | Tenant namespace for the cluster. |
| `<cluster-name>` | openbaocluster-demo | OpenBaoCluster name. |
| `<openbao-version>` | 2.5.1 | OpenBao version. |
| `<gateway-name>` | shared-gateway | Existing terminating Gateway used by the local toolchain. |
| `<gateway-namespace>` | default | Namespace of the Gateway. |
| `<external-host>` | bao-demo.example.com | External hostname for the shared-edge route. |
| `<operator-namespace>` | openbao-operator-system | Namespace that hosts the central OpenBaoTenant resource. |
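If you keep the manifests below as templates, a small shell helper can stamp these values in before apply. A minimal sketch — the placeholder names match the table, but the template file name and `render` helper are hypothetical, not part of the operator:

```shell
# Hypothetical helper: substitute recipe placeholders in a manifest template.
NAMESPACE=openbaocluster-demo
CLUSTER_NAME=openbaocluster-demo
OPENBAO_VERSION=2.5.1

render() {
  sed -e "s|<namespace>|${NAMESPACE}|g" \
      -e "s|<cluster-name>|${CLUSTER_NAME}|g" \
      -e "s|<openbao-version>|${OPENBAO_VERSION}|g" "$1"
}

# Demo on a one-line template fragment; the real templates are the manifests below.
printf 'name: <cluster-name>\n' > /tmp/manifest.tmpl.yaml
render /tmp/manifest.tmpl.yaml
# → name: openbaocluster-demo
```

The rendered output can be piped straight into `kubectl apply -f -`, which keeps the placeholder table the single place you edit.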

Step 1: Onboard the tenant namespace

Apply

Create the namespace, onboarding request, and admin ServiceAccount

yaml

apiVersion: v1
kind: Namespace
metadata:
  name: <namespace>
  labels:
    openbao.org/tenant: "true"
---
apiVersion: openbao.org/v1alpha1
kind: OpenBaoTenant
metadata:
  name: <cluster-name>-tenant
  namespace: <operator-namespace>
spec:
  targetNamespace: <namespace>
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: openbao-admin
  namespace: <namespace>

Verify

Wait for tenant provisioning

bash

kubectl -n <operator-namespace> describe openbaotenant <cluster-name>-tenant

The steady-state expectation is Provisioned=True. Do not move to the cluster manifest until the namespace is actually prepared for the operator.

Step 2: Create the backup credentials Secret

Apply

Create the RustFS credentials Secret

bash

kubectl -n <namespace> create secret generic rustfs-secret \
  --from-literal=accessKeyId='rustfsadmin' \
  --from-literal=secretAccessKey='rustfsadmin'

If your local RustFS instance uses different credentials, replace both values here and in the object-storage service itself.
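Equivalently, the same Secret can be applied declaratively if you prefer to keep everything in manifests; the data values below are simply the base64 encoding of the default rustfsadmin strings:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: rustfs-secret
  namespace: <namespace>
type: Opaque
data:
  accessKeyId: cnVzdGZzYWRtaW4=      # base64 of "rustfsadmin"
  secretAccessKey: cnVzdGZzYWRtaW4=  # base64 of "rustfsadmin"
```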

Step 3: Apply the validated development cluster manifest

Apply

Apply the Development-profile cluster

yaml

apiVersion: openbao.org/v1alpha1
kind: OpenBaoCluster
metadata:
  name: <cluster-name>
  namespace: <namespace>
spec:
  profile: Development
  replicas: 3
  version: "<openbao-version>"

  tls:
    enabled: true
    mode: OperatorManaged
    rotationPeriod: "720h"

  configuration:
    logLevel: "info"
    ui: true
    logging:
      format: "json"
    defaultLeaseTTL: "720h"
    maxLeaseTTL: "8760h"
    cacheSize: 134217728
    disableCache: false
    raft:
      performanceMultiplier: 2

  storage:
    size: "10Gi"
    deletionPolicy: DeleteAll

  selfInit:
    enabled: true
    oidc:
      enabled: true
    requests:
      - name: enable-userpass-auth
        operation: update
        path: sys/auth/userpass
        authMethod:
          type: userpass
      - name: enable-jwt-auth
        operation: update
        path: sys/auth/jwt
        authMethod:
          type: jwt
      - name: enable-demo-kv
        operation: update
        path: sys/mounts/secret
        secretEngine:
          type: kv
          description: "Demo KV v2 engine"
          options:
            version: "2"
      - name: create-admin-policy
        operation: update
        path: sys/policies/acl/admin
        policy:
          policy: |
            path "*" {
              capabilities = ["create", "read", "update", "delete", "list", "sudo"]
            }
      - name: create-demo-ui-user
        operation: update
        path: auth/userpass/users/demo-admin
        data:
          password: "demo-password"
          token_policies:
            - admin
      - name: create-admin-jwt-role
        operation: update
        path: auth/jwt/role/admin
        data:
          role_type: jwt
          user_claim: sub
          bound_audiences:
            - openbao-internal
          bound_subject: system:serviceaccount:<namespace>:openbao-admin
          token_policies:
            - admin
          policies:
            - admin
          ttl: 1h

  gateway:
    enabled: true
    listenerName: websecure
    gatewayRef:
      name: <gateway-name>
      namespace: <gateway-namespace>
    hostname: "<external-host>"
    backendTLS:
      enabled: true
    tlsPassthrough: false
    path: /

  backup:
    schedule: "*/30 * * * *"
    target:
      provider: s3
      endpoint: "http://rustfs-svc.rustfs.svc.cluster.local:9000"
      bucket: "openbao-backups"
      pathPrefix: "clusters/<cluster-name>"
      usePathStyle: true
      credentialsSecretRef:
        name: rustfs-secret
    retention:
      maxCount: 7
      maxAge: "168h"

  upgrade:
    preUpgradeSnapshot: true
    strategy: BlueGreen

Demo-only credentials

The demo-admin user exists only to make local validation easy. Keep it out of any shared or long-lived environment.
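Note how the schedule and retention settings interact: with maxCount: 7 and a backup every 30 minutes, maxCount is the binding limit, not the 168h maxAge. Quick arithmetic, assuming the schedule fires on time:

```shell
# "*/30 * * * *" fires every 30 minutes.
per_day=$(( 24 * 60 / 30 ))    # 48 backups per day
per_week=$(( per_day * 7 ))    # 336 backups inside the 168h maxAge window
kept_minutes=$(( 7 * 30 ))     # maxCount=7 keeps only the newest 210 minutes
echo "per_day=${per_day} per_week=${per_week} kept_minutes=${kept_minutes}"
# → per_day=48 per_week=336 kept_minutes=210
```

If you want maxAge to be the effective limit instead, raise maxCount well above the per-window count or relax the schedule.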

AppArmor on local clusters

If kubelet rejects the Pods because AppArmor is unavailable, add:

spec:
  workloadHardening:
    appArmorEnabled: false

Verify the lane

Verify

Check the cluster conditions

bash

kubectl -n <namespace> get openbaocluster <cluster-name> \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{" reason="}{.reason}{"\n"}{end}'

The steady-state expectation is Available=True, TLSReady=True, UserAccessBootstrap=True, BackupConfigurationReady=True, GatewayIntegrationReady=True, OpenBaoInitialized=True, and OpenBaoSealed=False.
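To turn that expectation into a pass/fail check, feed the jsonpath output through a small loop. The conditions variable below holds a hypothetical healthy sample (the reason values are illustrative, not the operator's actual reasons); in the real lane, capture the kubectl output into it instead:

```shell
# Hypothetical healthy output of the jsonpath query above.
conditions='Available=True reason=Ready
TLSReady=True reason=CertificatesIssued
UserAccessBootstrap=True reason=Bootstrapped
BackupConfigurationReady=True reason=Configured
GatewayIntegrationReady=True reason=RouteAccepted
OpenBaoInitialized=True reason=SelfInitComplete
OpenBaoSealed=False reason=Unsealed'

healthy=true
for want in Available=True TLSReady=True UserAccessBootstrap=True \
    BackupConfigurationReady=True GatewayIntegrationReady=True \
    OpenBaoInitialized=True OpenBaoSealed=False; do
  printf '%s\n' "$conditions" | grep -q "^${want}" \
    || { echo "unhealthy: ${want}"; healthy=false; }
done
$healthy && echo "all conditions match"
```

Anchoring each pattern with `^` means a stale False/Unknown status for any listed condition fails the check rather than slipping through.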

Verify

Confirm the cluster did not persist a root token Secret

bash

kubectl -n <namespace> get secret <cluster-name>-root-token

This should fail with Error from server (NotFound). A self-init lane must not leave the root token stored as a Kubernetes Secret.

Verify

Verify the demo UI login through the local service

bash

# Run the port-forward in a separate terminal; it blocks until interrupted.
kubectl -n <namespace> port-forward svc/<cluster-name> 8200:8200
export VAULT_ADDR="https://127.0.0.1:8200"

curl -sS -k \
  -H 'Content-Type: application/json' \
  -d '{"password":"demo-password"}' \
  "${VAULT_ADDR%/}/v1/auth/userpass/login/demo-admin"

Local browsers and CLIs may warn about the operator-managed CA. That is expected in this lane.
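The login response is JSON whose auth.client_token field is what the UI or CLI needs next. One portable way to pull it out without a jq dependency, shown here against a hypothetical trimmed response rather than a live cluster:

```shell
# Hypothetical, trimmed login response; in practice capture the curl output.
response='{"auth":{"client_token":"example-token-value","token_policies":["admin","default"]}}'

# Extract the client_token value with sed (avoids requiring jq locally).
token="$(printf '%s' "$response" | sed -n 's/.*"client_token":"\([^"]*\)".*/\1/p')"
echo "token=${token}"
# → token=example-token-value
```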

Verify

Verify JWT admin login

bash

JWT="$(kubectl -n <namespace> create token openbao-admin --audience openbao-internal --duration=1h)"

curl -sS -k \
  -H 'Content-Type: application/json' \
  -d "{\"role\":\"admin\",\"jwt\":\"${JWT}\"}" \
  "${VAULT_ADDR%/}/v1/auth/jwt/login"

Verify

Trigger and inspect a manual backup

bash

kubectl -n <namespace> annotate openbaocluster <cluster-name> \
  openbao.org/trigger-backup="$(date -u +%Y-%m-%dT%H:%M:%SZ)" --overwrite

kubectl -n <namespace> get openbaocluster <cluster-name> \
  -o jsonpath='{.status.backup.lastBackupName}{"\n"}{.status.backup.lastBackupTime}{"\n"}{.status.backup.lastFailureReason}{"\n"}{.status.backup.lastFailureMessage}{"\n"}'

A successful lane should produce a backup object key and no failure reason or failure message.

Keep moving

Next release documentation

You are reading the unreleased main docs. Use the version menu for the newest published release, or check the release notes for what is already out.
