
Upgrading to 0.10.x

This guide explains how to upgrade from 0.9.x to 0.10.x.

Before you begin the upgrade process, ensure that you have a backup of your data.

The upgrade process will not touch the actual backup data, but might change the resource definitions and recreate the Deployments of backups, the operator, and other components.

Follow the steps in this checklist to upgrade to 0.10.x.

Pause all backups to prevent any changes to the backup data by setting the .spec.enabled field to false.

#!/bin/sh
for backup_name in $(kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name' --no-headers)
do
  kubectl patch --type merge backup -n [NAMESPACE] "$backup_name" -p '{"spec":{"enabled": false}}'
done

Replace [NAMESPACE] with the namespace where the backups are located.
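Before proceeding, it may help to confirm that every backup was actually paused. One way to do this is to print the .spec.enabled field of each backup (a quick sanity check using standard kubectl output formatting, not part of the official upgrade steps):

```shell
# Print each backup alongside its .spec.enabled value;
# after pausing, every row should show "false".
kubectl get backups -n [NAMESPACE] \
  -o custom-columns='NAME:.metadata.name,ENABLED:.spec.enabled'
```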

Pause the existing operator by setting the .spec.replicas field to 0.

$ kubectl scale deployment kannika-operator -n kannika-system --replicas=0

This will ensure no changes are made to the existing resources while the upgrade is in progress.
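To double-check that the operator is fully stopped before continuing, you can inspect the Deployment and wait for its pods to terminate (a verification sketch using standard kubectl commands):

```shell
# The operator Deployment should report 0/0 ready replicas
kubectl get deployment kannika-operator -n kannika-system

# Optionally wait until all operator pods have terminated
kubectl wait pods -n kannika-system \
  -l app.kubernetes.io/name=kannika-operator \
  --for=delete --timeout=60s
```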

Install the new Custom Resource Definitions (CRDs) for 0.10.x, either by applying the manifest directly:

$ kubectl apply -f https://docs.kannika.io/refs/0.10.1/crd/kannika-crd-v1alpha.yml

or by installing the kannika-crd Helm chart:

$ helm install kannika-crd oci://quay.io/kannika/charts/kannika-crd \
    --version 0.10.1
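Either method registers the new CRDs in the cluster. To verify they are present, you can list them (an informal check, assuming the CRD names contain the kannika.io group):

```shell
# List the installed Kannika CRDs
kubectl get crds | grep kannika.io
```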

Update the existing resources to use the new fields and features available in 0.10.x.

Migrate Restores to use the new topics field


The topics field was introduced in 0.9.x to simplify the configuration of topic mappings in a restore; it replaces the deprecated .spec.config.mapping field.

restore.yaml
apiVersion: kannika.io/v1alpha
kind: Restore
spec:
  source: source
  sink: sink
  config:
    mapping:                  # Deprecated: removed in 0.10
      source.topic:
        target: target.topic
  topics:                     # Replacement introduced in 0.9.x
    - source: source.topic
      target: target.topic

The .spec.config.mapping field has been removed in 0.10.
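If you are unsure whether any Restores still use the deprecated field, one way to find them is a JSONPath filter on .spec.config.mapping (an illustrative sketch relying on kubectl's JSONPath filter syntax):

```shell
# Print namespace/name of every Restore that still sets .spec.config.mapping
kubectl get restores -A -o jsonpath='{range .items[?(@.spec.config.mapping)]}{.metadata.namespace}{"/"}{.metadata.name}{"\n"}{end}'
```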

Migrate Backups to segmentRolloverTriggers


The partitionRolloverTriggers field has been renamed to segmentRolloverTriggers in 0.9.x.

backup.yaml
apiVersion: kannika.io/v1alpha
kind: Backup
spec:
  source: source
  sink: sink
  partitionRolloverTriggers:  # Deprecated: removed in 0.10
    # ...
  segmentRolloverTriggers:    # Renamed field introduced in 0.9.x
    # ...

The .spec.partitionRolloverTriggers field has been removed in 0.10.
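Similarly, you can locate Backups that still use the old field name with a JSONPath filter (an illustrative sketch relying on kubectl's JSONPath filter syntax):

```shell
# Print namespace/name of every Backup that still sets .spec.partitionRolloverTriggers
kubectl get backups -A -o jsonpath='{range .items[?(@.spec.partitionRolloverTriggers)]}{.metadata.namespace}{"/"}{.metadata.name}{"\n"}{end}'
```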

Install application with updated Helm values


The operator.config.schemaRegistryBackup.image field has been removed in favour of the new operator.config.schemaRegistryImage field. In case you require a specific image for the Schema Registry Backup component, e.g. when using a private registry or a custom image, you can configure the image in the Helm chart:

values.yaml
operator:
  config:
    schemaRegistryBackup:     # Removed in 0.10
      repository: quay.io/kannika/schema-registry-backup
      tag: 0.9.0
    schemaRegistryImage:      # New field in 0.10
      repository: quay.io/kannika/schema-registry-backup
      tag: 0.10.1

Configure operator for SchemaRegistryRestore


New operator.config.schemaRegistryRestore settings are available for the new SchemaRegistryRestore resource.

In case you require specific settings for all pods running a SchemaRegistryRestore resource (security context, resource requirements, etc.), you can configure default pod settings in the Helm chart:

values.yaml
operator:
  config:
    schemaRegistryRestore:
      pod:
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        # securityContext: ...
        # nodeSelector: ...
        # tolerations: ...
        # affinity: ...
        # serviceAccountName: ...
        # container: ...
        # imagePullSecrets: ...

Install the new version of Kannika Armory using Helm:

$ helm upgrade --install kannika oci://quay.io/kannika/charts/kannika \
    --create-namespace \
    --namespace kannika-system \
    --version 0.10.1 \
    --values values.yaml
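To confirm the release was upgraded, you can inspect it with standard Helm commands (a quick check, not part of the official steps):

```shell
# Show the status and deployed chart version of the release
helm status kannika -n kannika-system
helm list -n kannika-system
```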

Once you have completed the upgrade process, enable backups again by setting the .spec.enabled field to true.

#!/bin/sh
for backup_name in $(kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name' --no-headers)
do
  kubectl patch --type merge backup -n [NAMESPACE] "$backup_name" -p '{"spec":{"enabled": true}}'
done

Replace [NAMESPACE] with the namespace where the backups are located.
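You can confirm that every backup was re-enabled by printing the .spec.enabled field (a quick check using standard kubectl output formatting):

```shell
# Print each backup alongside its .spec.enabled value;
# after re-enabling, every row should show "true".
kubectl get backups -n [NAMESPACE] \
  -o custom-columns='NAME:.metadata.name,ENABLED:.spec.enabled'
```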

Verify that the upgrade was successful by checking the logs of the Kannika Armory components:

$ kubectl logs -n kannika-system -l app.kubernetes.io/name=kannika-operator
$ kubectl logs -n kannika-system -l app.kubernetes.io/name=kannika-api
$ kubectl logs -n kannika-system -l app.kubernetes.io/name=kannika-console

Verify that the backups are running as expected:

$ kubectl get backups -n [NAMESPACE]

If you encounter any issues during the upgrade process, do not hesitate to contact us on Slack.