Upgrading to 0.9.x
This guide explains how to upgrade from 0.8.x to 0.9.x.
Before you begin
Before you begin the upgrade process, ensure that you have a backup of your data.
The upgrade process will not touch the actual backup data, but might change the resource definitions and recreate the Deployments of backups, the operator, and other components.
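Since the resource definitions may be changed during the upgrade, it can be useful to keep a copy of them beforehand. A minimal sketch using kubectl (file names are just examples):

```
# Export the current custom resources so they can be inspected or restored later.
# Replace [NAMESPACE] with the namespace where your resources live.
kubectl get backups,restores -n [NAMESPACE] -o yaml > kannika-resources-pre-upgrade.yaml

# Keep a copy of the operator Deployment definition as well.
kubectl get deployment kannika-operator -n kannika-system -o yaml > kannika-operator-pre-upgrade.yaml
```

These exports are a convenience for comparison and rollback of resource definitions; they are not a backup of the actual backup data.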
Checklist
Follow the steps in this checklist to upgrade to 0.9.x.
Pause backups
Pause all backups to prevent any changes to the backup data by setting the .spec.enabled field to false.
```sh
#!/usr/bin/sh
for backup_name in $(kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name' --no-headers)
do
  kubectl patch --type merge backup -n [NAMESPACE] "$backup_name" -p '{"spec":{"enabled": false}}'
done
```

```fish
#!/usr/bin/fish
for backup_name in (kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name' --no-headers)
  kubectl patch --type merge backup -n [NAMESPACE] $backup_name -p '{"spec":{"enabled": false}}'
end
```

Replace [NAMESPACE] with the namespace where the backups are located.
Pause existing operator
Pause the existing operator by setting the .spec.replicas field to 0.
```sh
$ kubectl scale deployment kannika-operator -n kannika-system --replicas=0
```

This will ensure no changes are made to the existing resources while the upgrade is in progress.
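Before continuing, you can confirm that the operator has fully stopped. A sketch assuming the default deployment name, namespace, and the `app.kubernetes.io/name=kannika-operator` label used elsewhere in this guide:

```
# Wait until the operator pods have terminated.
kubectl wait --for=delete pod -l app.kubernetes.io/name=kannika-operator -n kannika-system --timeout=120s

# The deployment should now report 0 ready replicas.
kubectl get deployment kannika-operator -n kannika-system
```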
Upgrade CRDs
Install the new Custom Resource Definitions (CRDs) for 0.9.x:
Using kubectl
```sh
$ kubectl apply -f https://docs.kannika.io/refs/0.9.1/crd/kannika-crd-v1alpha.yml
```

Using Helm
```sh
$ helm install kannika-crd oci://quay.io/kannika/charts/kannika-crd \
    --version 0.9.1
```

Update resources
Update the existing resources to use the new fields and features available in 0.9.x.
Migrate Restores to use the new topics field
The topics field has been introduced in 0.9.x to simplify the configuration of topic mappings in a restore,
and replaces the deprecated .spec.config.mapping field.
```yaml
apiVersion: kannika.io/v1alpha
kind: Restore
spec:
  source: source
  sink: sink
  config:
    mapping:              # deprecated, will be removed in 0.10
      source.topic:
        target: target.topic
  topics:                 # new in 0.9.x
    - source: source.topic
      target: target.topic
```

The deprecated way using the .spec.config.mapping map will still work in 0.9.x,
but the API will automatically migrate the existing resources to the new format when making changes to the resource.
The .spec.config.mapping will be removed in 0.10.
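To find Restores that still use the deprecated field, you can filter on its presence. A sketch assuming jq is installed:

```
# List Restores that still set the deprecated .spec.config.mapping field.
kubectl get restores -A -o json \
  | jq -r '.items[] | select(.spec.config.mapping != null) | "\(.metadata.namespace)/\(.metadata.name)"'
```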
Migrate Backups to segmentRolloverTriggers
The partitionRolloverTriggers field has been renamed to segmentRolloverTriggers in 0.9.x.
```yaml
apiVersion: kannika.io/v1alpha
kind: Backup
spec:
  source: source
  sink: sink
  partitionRolloverTriggers: # ... (deprecated, will be removed in 0.10)
  segmentRolloverTriggers: # ... (new in 0.9.x)
```

The deprecated way using the .spec.partitionRolloverTriggers map will still work in 0.9.x,
but the API will automatically migrate the existing resources to the new format when making changes to the resource.
The .spec.partitionRolloverTriggers will be removed in 0.10.
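As with the Restore migration, you can list Backups that still use the deprecated field. A sketch assuming jq is installed:

```
# List Backups that still set the deprecated .spec.partitionRolloverTriggers field.
kubectl get backups -A -o json \
  | jq -r '.items[] | select(.spec.partitionRolloverTriggers != null) | "\(.metadata.namespace)/\(.metadata.name)"'
```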
Update plugin configuration with topic selectors
If you are using plugins, you will need to update the plugin configuration and specification to use the new topic selectors to apply the plugin to specific topics.
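The exact selector schema is defined by the 0.9.x plugin reference; purely as an illustration of the idea, with hypothetical field and plugin names (not the real schema):

```yaml
# Hypothetical sketch: apply a plugin only to selected topics.
# Field names are illustrative; consult the plugin documentation for the actual schema.
plugins:
  - name: my-plugin
    topics:
      - source.topic
      - another.topic
```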
Install application with updated Helm values
New operator.config.schemaRegistryBackup settings are available for the new SchemaRegistryBackup resource.
Configure registry backup image
A new quay.io/kannika/kannika-registry-backup:0.9.x image is available for the new SchemaRegistryBackup resource.
In case you are using a private container registry,
you can override the image in the Helm chart
using a new operator.config.schemaRegistryBackup.image setting:
```yaml
operator:
  config:
    schemaRegistryBackup:
      image: yourregistry.com/kannika/kannika-registry-backup
      tag: 0.9.1
      pullPolicy: IfNotPresent
```

Configure default settings for registry backup pods
In case you require specific settings for all pods running a SchemaRegistryBackup resource (security context, resource requirements, etc.), you can configure default pod settings in the Helm chart:
```yaml
operator:
  config:
    schemaRegistryBackup:
      pod:
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        # securityContext: ...
        # nodeSelector: ...
        # tolerations: ...
        # affinity: ...
        # serviceAccountName: ...
        # container: ...
        # imagePullSecrets: ...
```

Install the new version of Kannika Armory
Install the new version of Kannika Armory using Helm:
```sh
$ helm upgrade --install kannika oci://quay.io/kannika/charts/kannika \
    --create-namespace \
    --namespace kannika-system \
    --version 0.9.1 \
    --values values.yaml
```

Enable backups again
Once you have completed the upgrade process,
enable backups again by setting the .spec.enabled field to true.
```sh
#!/usr/bin/sh
for backup_name in $(kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name' --no-headers)
do
  kubectl patch --type merge backup -n [NAMESPACE] "$backup_name" -p '{"spec":{"enabled": true}}'
done
```

```fish
#!/usr/bin/fish
for backup_name in (kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name' --no-headers)
  kubectl patch --type merge backup -n [NAMESPACE] $backup_name -p '{"spec":{"enabled": true}}'
end
```

Replace [NAMESPACE] with the namespace where the backups are located.
Verify the installation
Verify that the upgrade was successful by checking the logs of the Kannika Armory components:
```sh
$ kubectl logs -n kannika-system -l app.kubernetes.io/name=kannika-operator
$ kubectl logs -n kannika-system -l app.kubernetes.io/name=kannika-api
$ kubectl logs -n kannika-system -l app.kubernetes.io/name=kannika-console
```

Verify that the backups are running as expected:
```sh
$ kubectl get backups -n [NAMESPACE]
```

If you encounter any issues during the upgrade process, do not hesitate to contact us on Slack.