This guide explains how to upgrade from 0.10.x to 0.11.x.
Before you begin
Before you begin the upgrade process,
ensure that you have a backup of your data.
The upgrade process will not touch the actual backup data,
but might change the resource definitions and remove the Deployments of backups,
the operator, and other components.
Checklist
Follow the steps in this checklist to upgrade to 0.11.x.
Pause backups
Pause all backups to prevent any changes to the backup data by setting the .spec.enabled field to false.
Bash
for backup_name in $(kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name' --no-headers); do
  kubectl patch --type merge backup -n [NAMESPACE] "$backup_name" -p '{"spec":{"enabled": false}}'
done
Fish
for backup_name in (kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name' --no-headers)
  kubectl patch --type merge backup -n [NAMESPACE] $backup_name -p '{"spec":{"enabled": false}}'
end
Replace [NAMESPACE]
with the namespace where the backups are located.
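To confirm that the backups are paused, you can list them and check the .spec.enabled field (the custom-columns output below is only an illustration):
$ kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name,ENABLED:.spec.enabled'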
Pause existing operator
Pause the existing operator by setting the .spec.replicas field to 0.
$ kubectl scale deployment kannika-operator -n kannika-system --replicas=0
This will ensure no changes are made to the existing resources while the upgrade is in progress.
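To confirm the operator is paused, check that its Deployment reports zero replicas:
$ kubectl get deployment kannika-operator -n kannika-system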
If you are using a Helm chart to manage the operator,
you can set the operator.replicaCount
field to 0
in the Helm values file:
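For example, a minimal values override, applied with your usual helm upgrade command (the key path follows the operator.replicaCount field mentioned above; the file name values.yaml is only an illustration):
operator:
  replicaCount: 0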
Upgrade CRDs
Install the new Custom Resource Definitions (CRDs) for 0.11.x:
Using kubectl
$ kubectl apply -f https://docs.kannika.io/refs/0.11.0/crd/kannika-crd-v1alpha.yml
Using Helm
$ helm install kannika-crd oci://quay.io/kannika/charts/kannika-crd \
--version 0.11.0 # use the exact 0.11.x version you are upgrading to
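After applying either method, you can check that the new CRDs are present. The filter below assumes the CRD names contain "kannika"; adjust it if your cluster uses different names:
$ kubectl get crds | grep kannika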
Upgrade application
Upgrade to the new version of Kannika Armory using Helm:
$ helm upgrade --install kannika oci://quay.io/kannika/charts/kannika \
--namespace kannika-system \
--version 0.11.0 # use the exact 0.11.x version you are upgrading to
If you disabled the operator using Helm,
make sure to reset the operator.replicaCount field to 1:
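For example, in the same values file used earlier (the key path is again an assumption based on the operator.replicaCount field above):
operator:
  replicaCount: 1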
Enable backups again
Once you have completed the upgrade process,
enable backups again by setting the .spec.enabled field to true.
Bash
for backup_name in $(kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name' --no-headers); do
  kubectl patch --type merge backup -n [NAMESPACE] "$backup_name" -p '{"spec":{"enabled": true}}'
done
Fish
for backup_name in (kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name' --no-headers)
  kubectl patch --type merge backup -n [NAMESPACE] $backup_name -p '{"spec":{"enabled": true}}'
end
Replace [NAMESPACE]
with the namespace where the backups are located.
Note
In this version, Backups’ Deployments have been replaced by StatefulSets in order to support the new ‘worker groups’ feature.
The operator should remove the old Deployments automatically after upgrading,
but if you notice that some of them remain, it is safe to delete them manually.
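For example, assuming a leftover Deployment kept the name of its Backup resource (an assumption; list the Deployments first to confirm which ones are stale):
$ kubectl get deployments -n [NAMESPACE]
$ kubectl delete deployment [BACKUP_NAME] -n [NAMESPACE]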
Verify the installation
Verify that the upgrade was successful by checking the logs of the Kannika Armory components:
$ kubectl logs -n kannika-system -l app.kubernetes.io/name=operator # or kannika-operator
$ kubectl logs -n kannika-system -l app.kubernetes.io/name=api # or kannika-api
$ kubectl logs -n kannika-system -l app.kubernetes.io/name=console # or kannika-console
Verify that the backups are running as expected:
$ kubectl get backups -n [NAMESPACE]
$ kubectl get schemaregistrybackups -n [NAMESPACE]
If you encounter any issues during the upgrade process,
do not hesitate to contact us on Slack.