Version: 0.1.0-rc.5

Decision matrix

Choose the upgrade strategy

| Strategy | Use it when | Operator behavior | Watch for |
| --- | --- | --- | --- |
| BlueGreen | You need staged validation, manual promotion, or stronger cutover boundaries for a risky change. | The operator creates a parallel Green revision, validates it, promotes it, and then cleans up the old Blue revision. | This path uses more cluster resources and introduces additional in-flight phases to watch. |

Diagram

Upgrade control flow

Every upgrade starts with validation. After that, the controller either executes a partitioned rolling rollout or creates a parallel Green revision for promotion and cleanup.

Before you patch spec.version

  • Confirm the cluster is initialized, healthy, and already safe to change. An upgrade is not the time to discover a broken backup path or an unstable seal configuration.
  • Set spec.version to the target semantic version. The operator blocks downgrades and validates semver format.
  • If you override spec.image, keep it aligned with spec.version. Semantic-version tags must match, and digest-pinned images still require spec.version as the authoritative intent.
  • Make sure the upgrade executor Job can authenticate:
    • with the default JWT role created from selfInit.oidc, or
    • with an explicit spec.upgrade.jwtAuthRole.
  • If you want pre-upgrade snapshots, configure spec.backup first and make sure the backup auth path is already working.
  • In the Hardened profile, explicitly allow egress to object storage or other external dependencies the upgrade path needs.
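
Taken together, the version and image bullets imply a spec shaped like the following sketch. The registry path is a placeholder, not a real image, and the digest form is shown only as a comment.

```yaml
# Sketch only: keep spec.image aligned with spec.version when you override it.
spec:
  version: "2.5.0"
  image: "ghcr.io/example/openbao:2.5.0"
  # A digest-pinned override (e.g. ...@sha256:<digest>) is also possible,
  # but spec.version remains the authoritative intent.
```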
Upgrade auth is separate from backup auth

The upgrade executor Job uses JWT auth to talk to OpenBao. Pre-upgrade snapshots use the backup configuration and backup identity path. Do not assume one automatically covers the other.
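
When the default role from selfInit.oidc does not fit, the explicit override looks roughly like this. The role name is a placeholder; only the spec.upgrade.jwtAuthRole field itself comes from this page.

```yaml
spec:
  upgrade:
    # Explicit JWT role for the upgrade executor Job; the name is a placeholder.
    # Without this field, the operator uses the default role created from selfInit.oidc.
    jwtAuthRole: upgrade-executor
```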

Use the default rolling path

Use RollingUpdate when you want the lowest operational complexity and you do not need a second revision running in parallel.

Configure

Use the default rolling path with OIDC bootstrap

```yaml
spec:
  version: "2.5.0"
  selfInit:
    enabled: true
    oidc:
      enabled: true
  upgrade:
    strategy: RollingUpdate
```

Apply

Patch the cluster to the target version

```bash
kubectl patch openbaocluster <name> -n <namespace> --type merge -p '{
  "spec": {
    "version": "2.5.0",
    "upgrade": {
      "strategy": "RollingUpdate"
    }
  }
}'
```

Rolling upgrades can step down more than once

If leadership moves while the rollout is in progress, you may see multiple step-down Jobs across the same upgrade. That is expected and does not mean the controller restarted the entire workflow.

Use blue-green when you need a controlled cutover

Choose BlueGreen when you need parallel validation, a manual promotion point, or stronger rollback boundaries before the new revision takes over production traffic.

Configure

Configure a blue-green upgrade with automatic rollback

```yaml
spec:
  version: "2.5.0"
  upgrade:
    strategy: BlueGreen
    preUpgradeSnapshot: true
    blueGreen:
      autoPromote: true
      autoRollback:
        enabled: true
        onJobFailure: true
        onValidationFailure: true
```

Configure

Add a pre-promotion verification hook

```yaml
spec:
  upgrade:
    strategy: BlueGreen
    blueGreen:
      verification:
        prePromotionHook:
          image: curlimages/curl
          command: ["curl", "-f", "https://green-cluster:8200/v1/sys/health"]
```

Use the hook to prove the Green revision is really healthy before promotion. If the hook fails, the operator either holds or rolls back depending on the autoRollback settings.
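
If you want a failing hook to trigger rollback rather than a hold, pair the verification block with autoRollback. This sketch simply combines the fields shown in the examples above; the nesting is an assumption to verify against the CRD.

```yaml
# Sketch: a failing pre-promotion hook combined with autoRollback on
# validation failure should roll back instead of holding.
spec:
  upgrade:
    strategy: BlueGreen
    blueGreen:
      autoRollback:
        enabled: true
        onValidationFailure: true
      verification:
        prePromotionHook:
          image: curlimages/curl
          command: ["curl", "-f", "https://green-cluster:8200/v1/sys/health"]
```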

Reference table

Control in-flight upgrades

| Control | What it does | Use it when |
| --- | --- | --- |
| spec.upgrade.requests.promote | Approves promotion when blueGreen.autoPromote=false and the Green revision is already healthy. | You want a manual checkpoint before switching traffic. |
| blueGreen.autoRollback | Aborts or rolls back automatically when validation or execution fails in supported phases. | You want the operator to recover from bad Green revisions without waiting for a human to react. |
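
As a sketch, a manual promotion request could be patched in like this. The spec.upgrade.requests.promote field comes from the table above, but the boolean value is an assumption; check the CRD for the exact request type before using it.

```shell
# Hypothetical manual promotion: approve the cutover when autoPromote is false.
# The boolean value is an assumption -- verify against the CRD.
PATCH='{"spec":{"upgrade":{"requests":{"promote":true}}}}'
echo "$PATCH"
# Apply it against the live cluster (name and namespace are placeholders):
#   kubectl patch openbaocluster <name> -n <namespace> --type merge -p "$PATCH"
```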

Verify the rollout outcome

Verify

Inspect the upgrade result

```bash
kubectl get openbaocluster <name> -n <namespace> -o yaml
kubectl get pods -n <namespace>
kubectl get jobs -n <namespace>
```

Look for an idle cluster rather than just a patched spec. The right end state is healthy pods, no unresolved upgrade error reason, and a condition surface that matches the cluster features you enabled.

Reference table

What good looks like after the upgrade

| Surface | Healthy signal | Why it matters |
| --- | --- | --- |
| Availability | Available=True and the workload Pods are Ready. | The new revision is actually serving instead of only existing on paper. |
| Upgrade status | No unresolved status.upgrade.lastErrorReason and no stalled blue-green phase. | The controller does not think operator action is still required. |
| Protection path | Backup status and external dependency conditions remain healthy. | A successful version change should not quietly break the next restore or backup window. |
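
One way to act on these signals is a small gate script. This is only a sketch with a sample value standing in for live output, and the jsonpath expression in the comment assumes standard Kubernetes-style conditions; it is not confirmed by this page.

```shell
# Sketch of a post-upgrade gate. On a real cluster you might pull the value
# with something like (field path is an assumption, not confirmed here):
#   kubectl get openbaocluster <name> -n <namespace> \
#     -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'
available="True"   # sample value, not live cluster output
if [ "$available" = "True" ]; then
  echo "Available=True: the new revision is serving"
else
  echo "upgrade still needs attention" >&2
  exit 1
fi
```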

Keep the change safe

Prerelease documentation

This version tracks a prerelease build. Features and behavior may change before the next stable release.
