
    Upgrading to 0.9.x

    This guide explains how to upgrade from 0.8.x to 0.9.x.

    Before you begin

    Before you begin the upgrade process, ensure that you have a backup of your data.

    The upgrade process will not touch the actual backup data, but might change the resource definitions and recreate the Deployments of backups, the operator, and other components.

    Checklist

    Follow the steps in this checklist to upgrade to 0.9.x.

    Pause backups

    Pause all backups to prevent any changes to the backup data by setting the .spec.enabled field to false.

#!/usr/bin/sh
for backup_name in $(kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name' --no-headers)
do
  kubectl patch --type merge backup -n [NAMESPACE] "$backup_name" -p '{"spec":{"enabled": false}}'
done

    Replace [NAMESPACE] with the namespace where the backups are located.
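To confirm that every backup is paused before you continue, you can list the .spec.enabled field of each backup (a convenience check, not a required step):

Terminal window
$ kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name,ENABLED:.spec.enabled'

Every backup should report false in the ENABLED column.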

    Pause existing operator

    Pause the existing operator by setting the .spec.replicas field to 0.

    Terminal window
    $ kubectl scale deployment kannika-operator -n kannika-system --replicas=0

    This will ensure no changes are made to the existing resources while the upgrade is in progress.
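If you want to verify that the operator is paused, check that its Deployment reports zero replicas:

Terminal window
$ kubectl get deployment kannika-operator -n kannika-system

The READY column should show 0/0 before you continue.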

    Upgrade CRDs

Install the new Custom Resource Definitions (CRDs) for 0.9.x:

    Using kubectl
    Terminal window
    $ kubectl apply -f https://docs.kannika.io/refs/0.9.1/crd/kannika-crd-v1alpha.yml
    Using Helm
    Terminal window
    $ helm install kannika-crd oci://quay.io/kannika/charts/kannika-crd \
    --version 0.9.1
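To confirm that the new definitions are registered, you can list the CRDs in the kannika.io API group (the exact CRD names depend on your installation):

Terminal window
$ kubectl get crds | grep kannika.io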

    Update resources

    Update the existing resources to use the new fields and features available in 0.9.x.

    Migrate Restores to use the new topics field

    The topics field has been introduced in 0.9.x to simplify the configuration of topic mappings in a restore, and replaces the deprecated .spec.config.mapping field.

    restore.yaml
apiVersion: kannika.io/v1alpha
kind: Restore
spec:
  source: source
  sink: sink
  # Deprecated, remove:
  config:
    mapping:
      source.topic:
        target: target.topic
  # Replacement:
  topics:
    - source: source.topic
      target: target.topic

The deprecated .spec.config.mapping field will still work in 0.9.x, and the API will automatically migrate existing resources to the new format whenever a resource is modified.

    The .spec.config.mapping will be removed in 0.10.
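To find Restore resources that still use the deprecated field, a jq-based filter is one option. This is only a sketch: it assumes jq is available and that the resources are exposed under the restores plural:

Terminal window
$ kubectl get restores -n [NAMESPACE] -o json \
    | jq -r '.items[] | select(.spec.config.mapping != null) | .metadata.name'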

    Migrate Backups to segmentRolloverTriggers

    The partitionRolloverTriggers field has been renamed to segmentRolloverTriggers in 0.9.x.

    backup.yaml
apiVersion: kannika.io/v1alpha
kind: Backup
spec:
  source: source
  sink: sink
  # Deprecated, remove:
  partitionRolloverTriggers:
    # ...
  # Replacement:
  segmentRolloverTriggers:
    # ...

The deprecated .spec.partitionRolloverTriggers field will still work in 0.9.x, and the API will automatically migrate existing resources to the new format whenever a resource is modified.

    The .spec.partitionRolloverTriggers will be removed in 0.10.
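Similarly, to find Backup resources that still use the deprecated field (again a sketch, assuming jq is installed):

Terminal window
$ kubectl get backups -n [NAMESPACE] -o json \
    | jq -r '.items[] | select(.spec.partitionRolloverTriggers != null) | .metadata.name'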

    Update plugin configuration with topic selectors

If you are using plugins, update the plugin configuration and specification to use the new topic selectors, which control which topics a plugin applies to.

    Install application with updated Helm values

    New operator.config.schemaRegistryBackup settings are available for the new SchemaRegistryBackup resource.

    Configure registry backup image

    A new quay.io/kannika/kannika-registry-backup:0.9.x image is available for the new SchemaRegistryBackup resource.

If you are using a private container registry, you can override the image in the Helm chart via the new operator.config.schemaRegistryBackup.image setting:

    values.yaml
operator:
  config:
    schemaRegistryBackup:
      image: yourregistry.com/kannika/kannika-registry-backup
      tag: 0.9.1
      pullPolicy: IfNotPresent
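If you want to double-check the override before upgrading, one option is to render the chart locally and search for the image reference. This is only a sketch; where the image appears in the rendered output depends on the chart:

Terminal window
$ helm template kannika oci://quay.io/kannika/charts/kannika \
    --version 0.9.1 \
    --values values.yaml \
    | grep kannika-registry-backup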

    Configure default settings for registry backup pods

If you require specific settings for all pods running a SchemaRegistryBackup resource (security context, resource requirements, etc.), you can configure default pod settings in the Helm chart:

    values.yaml
operator:
  config:
    schemaRegistryBackup:
      pod:
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        # securityContext: ...
        # nodeSelector: ...
        # tolerations: ...
        # affinity: ...
        # serviceAccountName: ...
        # container: ...
        # imagePullSecrets: ...

    Install the new version of Kannika Armory

    Install the new version of Kannika Armory using Helm:

    Terminal window
    $ helm upgrade --install kannika oci://quay.io/kannika/charts/kannika \
    --create-namespace \
    --namespace kannika-system \
    --version 0.9.1 \
    --values values.yaml
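Once the upgrade command completes, you can wait for the operator Deployment (the one scaled down earlier) to become available again:

Terminal window
$ kubectl rollout status deployment kannika-operator -n kannika-system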

    Enable backups again

    Once you have completed the upgrade process, enable backups again by setting the .spec.enabled field to true.

#!/usr/bin/sh
for backup_name in $(kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name' --no-headers)
do
  kubectl patch --type merge backup -n [NAMESPACE] "$backup_name" -p '{"spec":{"enabled": true}}'
done

    Replace [NAMESPACE] with the namespace where the backups are located.

    Verify the installation

    Verify that the upgrade was successful by checking the logs of the Kannika Armory components:

    Terminal window
    $ kubectl logs -n kannika-system -l app.kubernetes.io/name=kannika-operator
    $ kubectl logs -n kannika-system -l app.kubernetes.io/name=kannika-api
    $ kubectl logs -n kannika-system -l app.kubernetes.io/name=kannika-console

    Verify that the backups are running as expected:

    Terminal window
    $ kubectl get backups -n [NAMESPACE]

    If you encounter any issues during the upgrade process, do not hesitate to contact us on Slack.