Upgrading to 0.15.x
This guide explains how to upgrade from 0.14.x to 0.15.x.
Before you begin
Before you begin the upgrade process, ensure that you have a backup of your data.
The upgrade process will not touch the actual backup data, but might change the resource definitions.
Breaking changes
Minimum Kubernetes version
The minimum required version of Kubernetes has been increased from 1.28 to 1.30.
Make sure your cluster is running Kubernetes 1.30 or later before upgrading.
Restore filters
The flat restoreFromDateTime and restoreUntilDateTime fields on the Restore CRD are now deprecated
in favor of the new nested config.filters structure.
The old fields are still supported as a fallback,
but we recommend migrating to the new structure.
See the Restore filters documentation for more details.
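As an illustration of the migration, the sketch below contrasts the deprecated flat fields with the new nested structure. The field names under config.filters are assumptions made here for illustration; consult the Restore filters documentation for the authoritative schema.

```yaml
# Before: deprecated flat fields (still accepted as a fallback)
spec:
  restoreFromDateTime: "2024-01-01T00:00:00Z"
  restoreUntilDateTime: "2024-02-01T00:00:00Z"

# After: new nested config.filters structure
# (the key names under filters are assumed for illustration —
# see the Restore filters documentation for the exact schema)
spec:
  config:
    filters:
      fromDateTime: "2024-01-01T00:00:00Z"
      untilDateTime: "2024-02-01T00:00:00Z"
```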
Checklist
Follow the steps in this checklist to upgrade to 0.15.x.
Verify Kubernetes version
Ensure your cluster is running Kubernetes 1.30 or later:
```sh
$ kubectl version
```

Pause backups
Pause all backups to prevent any changes to the backup data by setting the .spec.enabled field to false.
```sh
#!/usr/bin/sh
for backup_name in $(kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name' --no-headers)
do
  kubectl patch --type merge backup -n [NAMESPACE] "$backup_name" -p '{"spec":{"enabled": false}}'
done
```

```fish
#!/usr/bin/fish
for backup_name in (kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name' --no-headers)
  kubectl patch --type merge backup -n [NAMESPACE] $backup_name -p '{"spec":{"enabled": false}}'
end
```

Replace [NAMESPACE] with the namespace where the backups are located.
Pause existing operator
Pause the existing operator by setting the .spec.replicas field to 0.
```sh
$ kubectl scale deployment kannika-operator -n kannika-system --replicas=0
```

This will ensure no changes are made to the existing resources while the upgrade is in progress.
If you are using a Helm chart to manage the operator,
you can set the operator.replicaCount field to 0 in the Helm values file:
```yaml
operator:
  replicaCount: 0
```

Upgrade CRDs
Install the new Custom Resource Definitions (CRDs) for 0.15.x:
Using kubectl
```sh
$ kubectl apply -f https://docs.kannika.io/refs/0.15.0/crd/kannika-crd-v1alpha.yml
```

Using Helm
```sh
$ helm install kannika-crd oci://quay.io/kannika/charts/kannika-crd \
    --version 0.15.0
```

Install application with updated Helm values
This release adds new optional Helm values for global logging configuration. See Logging configuration for details.
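As a rough illustration only — the key names below are assumptions, not the published schema — a global logging section in your values.yaml might look like the following; check the Logging configuration documentation for the actual values.

```yaml
# Hypothetical values.yaml fragment: the exact keys are defined in the
# Logging configuration documentation, not here.
global:
  logging:
    level: info
```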
Install the new version of Kannika Armory
Install the new version of Kannika Armory using Helm:
```sh
$ helm upgrade --install kannika oci://quay.io/kannika/charts/kannika \
    --create-namespace \
    --namespace kannika-system \
    --version 0.15.0 \
    --values values.yaml
```

Enable backups again
Once you have completed the upgrade process,
enable backups again by setting the .spec.enabled field to true.
```sh
#!/usr/bin/sh
for backup_name in $(kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name' --no-headers)
do
  kubectl patch --type merge backup -n [NAMESPACE] "$backup_name" -p '{"spec":{"enabled": true}}'
done
```

```fish
#!/usr/bin/fish
for backup_name in (kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name' --no-headers)
  kubectl patch --type merge backup -n [NAMESPACE] $backup_name -p '{"spec":{"enabled": true}}'
end
```

Replace [NAMESPACE] with the namespace where the backups are located.
Verify the installation
Verify that the upgrade was successful by checking the logs of the Kannika Armory components:
```sh
$ kubectl logs -n kannika-system -l app.kubernetes.io/name=kannika-operator # or operator
$ kubectl logs -n kannika-system -l app.kubernetes.io/name=kannika-api # or api
$ kubectl logs -n kannika-system -l app.kubernetes.io/name=kannika-console # or console
```

Verify that the backups are running as expected:
```sh
$ kubectl get backups -n [NAMESPACE]
$ kubectl get schemaregistrybackups -n [NAMESPACE]
```

If you encounter any issues during the upgrade process, do not hesitate to contact us on Slack.