Kubernetes makes mounting secrets easy; it does not make secret lifecycle easy. Teams copy base64 blobs between namespaces, fork Helm charts with embedded credentials, or lean on long-lived tokens because rotation scripts are brittle. The result is sprawl: nobody knows which secret backs which workload, and revocation becomes an archaeology project.
Anchor on a single external source of truth—cloud KMS with CSI drivers, HashiCorp Vault, or your cloud provider’s secret manager—and treat cluster Secrets as ephemeral projections. Namespace-scoped access via RBAC and IAM-bound identities beats shared cluster-admin kubeconfigs.
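With the External Secrets Operator, the "ephemeral projection" pattern looks roughly like the manifest below: a namespace-scoped store pulls from the external backend and materializes an in-cluster Secret on a refresh interval. The store name, secret path, and key names are illustrative assumptions, not a prescription.

```yaml
# Sketch: project a secret from an external manager into one namespace.
# Names and paths below are assumptions for illustration.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: payments-db
  namespace: payments
spec:
  refreshInterval: 1h          # re-sync so the cluster copy stays a projection
  secretStoreRef:
    kind: SecretStore          # namespace-scoped store, not ClusterSecretStore
    name: aws-payments
  target:
    name: payments-db          # the ephemeral in-cluster Secret
    creationPolicy: Owner      # operator owns it; deleting the CR deletes it
  data:
    - secretKey: password
      remoteRef:
        key: prod/payments/db
        property: password
```

Using `SecretStore` rather than `ClusterSecretStore` keeps the backend credentials and blast radius confined to the namespace, in line with the RBAC point above.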
Rotation needs automation with backoff and health checks. Prefer short-lived credentials (OIDC workload identity, dynamic database passwords) over static files. When you must store symmetric keys, document owners, TTLs, and break-glass procedures.
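The short-lived-credentials point is clearest with workload identity. On EKS, for example, annotating a ServiceAccount binds pods to an IAM role via the cluster's OIDC provider, so they exchange a projected token for temporary AWS credentials and there is no static key to rotate. The role ARN here is an illustrative assumption.

```yaml
# Sketch: EKS IRSA — pods using this ServiceAccount exchange a projected
# OIDC token for short-lived AWS credentials; nothing static to rotate.
# The account ID and role name are placeholders.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: payments-api
  namespace: payments
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/payments-api
```

GKE Workload Identity and Azure Workload Identity follow the same shape with their own annotations.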
GitOps complicates the story in a good way: desired state stays in Git while secret references point to external IDs. Never commit plaintext; use sealed secrets or external-secret operators with tight scopes.
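With Bitnami Sealed Secrets, what lands in Git is ciphertext encrypted to the controller's public key and bound to its namespace, so the repo never holds plaintext. The `encryptedData` value below is a placeholder standing in for real `kubeseal` output.

```yaml
# Sketch: the artifact committed to Git under Sealed Secrets.
# The ciphertext is a placeholder; real values come from `kubeseal`.
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: payments-db
  namespace: payments
spec:
  encryptedData:
    password: AgB8...          # placeholder ciphertext, safe in Git
  template:
    metadata:
      name: payments-db        # the Secret the controller will create
```

External-secret operators achieve the same Git-safety differently: the committed manifest holds only a reference to an external ID, never the value.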
Audit readiness: maintain an inventory keyed by workload, classify data handled, and test restore paths quarterly. Exercises reveal whether your backups of secret metadata are as good as the secrets themselves.
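The workload-keyed inventory can be derived mechanically rather than maintained by hand. A minimal sketch, assuming you already have workload manifests as parsed JSON (the inline deployment below is a stand-in for `kubectl get deploy -o json` output; field names follow the core pod spec):

```python
# Sketch: build a secret inventory keyed by workload from parsed manifests.
# The inline deployment is an illustrative stand-in for real kubectl output.

def secret_refs(pod_spec):
    """Collect names of Secrets a pod spec references via env or volumes."""
    refs = set()
    for container in pod_spec.get("containers", []):
        for env in container.get("env", []):
            ref = env.get("valueFrom", {}).get("secretKeyRef")
            if ref:
                refs.add(ref["name"])
        for src in container.get("envFrom", []):
            if "secretRef" in src:
                refs.add(src["secretRef"]["name"])
    for vol in pod_spec.get("volumes", []):
        if "secret" in vol:
            refs.add(vol["secret"]["secretName"])
    return refs

def build_inventory(workloads):
    """Map workload name -> sorted list of referenced Secret names."""
    return {
        w["metadata"]["name"]: sorted(secret_refs(w["spec"]["template"]["spec"]))
        for w in workloads
    }

deployment = {
    "metadata": {"name": "payments-api"},
    "spec": {"template": {"spec": {
        "containers": [{
            "name": "api",
            "env": [{"name": "DB_PASSWORD",
                     "valueFrom": {"secretKeyRef": {"name": "payments-db",
                                                    "key": "password"}}}],
        }],
        "volumes": [{"name": "tls", "secret": {"secretName": "payments-tls"}}],
    }}},
}

print(build_inventory([deployment]))
# {'payments-api': ['payments-db', 'payments-tls']}
```

Run on a schedule, a report like this catches secrets no workload references anymore, which are prime candidates for revocation.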
Platform teams should publish golden patterns—one chart for mounting AWS Secrets Manager, one for Vault Agent sidecars—and measure adoption instead of mandating bespoke YAML per service.
Related: DevOps consulting and how we staff platform work.
