Upgrading to 0.7.x

This guide explains how to upgrade from 0.6.x to 0.7.x.

Before you begin the upgrade process, ensure that you have a backup of your data.

The upgrade process will not touch the actual backup data, but will change the resource definitions.

Follow the steps in this checklist to upgrade to 0.7.x.

Pause all backups by setting the .spec.enabled field to false, so that no changes are made to the backup data during the upgrade.

#!/bin/sh
for backup_name in $(kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name' --no-headers)
do
  kubectl patch --type merge backup -n [NAMESPACE] "$backup_name" -p '{"spec":{"enabled": false}}'
done

Replace [NAMESPACE] with the namespace where the backups are located.

Pause the existing operator by setting the .spec.replicas field to 0.

$ kubectl scale deployment kannika-operator -n kannika-system --replicas=0

Install the new Custom Resource Definitions (CRDs) for 0.7.x, either with kubectl:

$ kubectl apply -f https://docs.kannika.io/refs/0.7.2/crd/kannika-crd-v1alpha.yml

or with Helm:

$ helm install kannika-crd oci://quay.io/kannika/charts/kannika-crd \
    --version 0.7.2

Remove .spec.storage from backups and restores

Before making any of the changes below, remove the .spec.storage field from all backup and restore definitions that are managed outside of Kubernetes (for example, in version-controlled manifests).

In previous versions, the operator automatically set this field to the default storage class; the field has been removed in 0.7.x.
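
For illustration, a Backup that still carries the field might look like the following; the storage class value shown is only a placeholder. Delete the marked line before upgrading:

apiVersion: kannika.io/v1alpha
kind: Backup
metadata:
  name: my-backup
spec:
  storage: standard # Remove this field; it no longer exists in 0.7.x
  source: production-kafka
  sink: my-volume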

Migrate Endpoint Secrets to EventHubs and Storages

In 0.7.x, support for the legacy Endpoint Secrets has been removed in favor of the new EventHub and Storage resources. Storage resources were already available in earlier versions, but migrating to them is now required.

To get a list of all Endpoint Secrets in the cluster, run the following command:

$ kubectl get secrets \
    --field-selector type=kannika.io/endpoint \
    --all-namespaces

For each Endpoint Secret, create a new EventHub or Storage with the same name, and copy the data from the Endpoint Secret to the new resource.

Migrate Volume Endpoints to Volume Storages

To migrate a Volume Endpoint, you must create a Volume Storage instead.

Example of a legacy Volume Endpoint Secret:

apiVersion: v1
kind: Secret
type: kannika.io/endpoint
metadata:
  name: "my-volume"
  labels:
    io.kannika/resource-id: "io.kannika.endpoint.my-volume"
    io.kannika/resource-type: "volume"
data:
  capacity: MTBHaQ== # 10Gi
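
The Volume Storage replacing it could look roughly like this. This is a sketch, not the authoritative schema: the kind and the spec layout are assumptions, and only the capacity is taken (decoded) from the Secret above; consult the Storage resource reference for the exact fields.

apiVersion: kannika.io/v1alpha
kind: Storage # Assumed kind and layout; verify against the Storage reference
metadata:
  name: "my-volume" # Keeping the same name avoids updating sinks in existing backups
spec:
  volume:
    capacity: 10Gi # Plain value; no longer base64-encoded Secret data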

Migrate Kafka Endpoints to Kafka EventHubs

To migrate a Kafka Endpoint, you must create a Kafka EventHub.

For authentication, you must also create new Credentials resources.

Example of a legacy Kafka Endpoint Secret:

apiVersion: v1
kind: Secret
type: kannika.io/endpoint
metadata:
  name: "production-kafka"
  labels:
    io.kannika/resource-id: "io.kannika.endpoint.production-kafka"
    io.kannika/resource-type: "kafka"
data:
  bootstrap.servers: "YnJva2VyMTo5MDkyLGJyb2tlcjI6OTA5Mg==" # broker1:9092,broker2:9092
  consumer-group-id: "a2FubmlrYQ==" # kannika
  security.protocol: "U0FTTF9TU0w=" # SASL_SSL
  auth.sasl.plain.username: "Ym9i" # bob
  auth.sasl.plain.password: "MTIzNDU2" # 123456
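
The replacement could look roughly like the following EventHub and Credentials pair. This is a sketch under assumptions: the kinds, field names, and spec layout are not taken from the 0.7.x schema, only the decoded values from the Secret above are; consult the EventHub and Credentials references for the exact fields.

apiVersion: kannika.io/v1alpha
kind: EventHub # Assumed kind and layout; verify against the EventHub reference
metadata:
  name: "production-kafka" # Keeping the same name avoids updating sources in existing backups
spec:
  kafka:
    bootstrapServers: "broker1:9092,broker2:9092" # Decoded from the Secret data
    consumerGroupId: "kannika"
    securityProtocol: "SASL_SSL"
---
apiVersion: kannika.io/v1alpha
kind: Credentials # Assumed kind and layout; sensitive values may belong in a Secret instead
metadata:
  name: "production-kafka-credentials" # Referenced later via sourceCredentialsFrom / sinkCredentialsFrom
spec:
  sasl:
    plain:
      username: "bob" # Decoded from the Secret data
      password: "123456"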

If you created new Storage or EventHub resources with names that differ from the Endpoint Secrets, make sure to update the source and sink fields in the backups and restores accordingly.

Example:

apiVersion: kannika.io/v1alpha
kind: Backup
metadata:
  name: my-backup
spec:
  source: production-kafka-eventhub # References the EventHub now (was: production-kafka-endpoint)
  sink: my-volume-storage # References the Storage now (was: my-volume-endpoint)

Define credentials on backups and restores

If you created new Credentials resources for the new EventHub or Storage resources, you must point the backups and restores at them by setting the sourceCredentialsFrom and sinkCredentialsFrom fields accordingly.

Old restores do not require any changes, unless you want to re-run them.

Example of a backup and a restore using the new credentials for the Kafka EventHub:

apiVersion: kannika.io/v1alpha
kind: Backup
metadata:
  name: my-backup
spec:
  source: production-kafka
  sourceCredentialsFrom:
    credentialsRef:
      name: production-kafka-credentials # References the new Credentials
  sink: my-volume
  # sinkCredentialsFrom: ...
---
apiVersion: kannika.io/v1alpha
kind: Restore
metadata:
  name: my-restore
spec:
  source: my-volume
  # sourceCredentialsFrom: ...
  sink: production-kafka
  sinkCredentialsFrom:
    credentialsRef:
      name: production-kafka-credentials # References the new Credentials

Install application with updated Helm values

The new version of Kannika Armory no longer defines default resource requirements in the Helm chart or application.

In addition, the platform no longer defines a default security context. You must therefore define these settings yourself in the Helm chart or application.

Note that these settings can also be defined on each Backup Pod or Restore Pod individually.
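
As a purely hypothetical sketch of such a per-resource override (the field names under spec below are invented for illustration and must be checked against the Backup reference):

apiVersion: kannika.io/v1alpha
kind: Backup
metadata:
  name: my-backup
spec:
  source: production-kafka
  sink: my-volume
  # Hypothetical override block mirroring the Helm chart's config.pod defaults;
  # the real field names depend on the Backup CRD schema.
  pod:
    securityContext: {}
    resources:
      requests:
        cpu: 100m
        memory: 512Mi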

The new version also supports defining default Tolerations, default Affinity, and default NodeSelector.

Example Helm chart values:

values.yaml

operator:
  securityContext:
    capabilities:
      drop:
        - ALL
    readOnlyRootFilesystem: true
    runAsNonRoot: true
    runAsUser: 1000
  tolerations: []
  affinity: {}
  nodeSelector: {}
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      # cpu: 200m
      memory: 256Mi
  config:
    # Default settings for backup and restore pods
    pod:
      securityContext: {}
      tolerations: []
      affinity: {}
      nodeSelector: {}
      resources:
        requests:
          cpu: 100m
          memory: 512Mi
        limits:
          # cpu: 200m
          memory: 512Mi
    container:
      securityContext:
        capabilities:
          drop:
            - ALL
        readOnlyRootFilesystem: true
        runAsNonRoot: true
        runAsUser: 1000

api:
  securityContext:
    capabilities:
      drop:
        - ALL
    readOnlyRootFilesystem: true
    runAsNonRoot: true
    runAsUser: 1000
  tolerations: []
  affinity: {}
  nodeSelector: {}
  resources:
    requests:
      cpu: 100m
      memory: 1Gi
    limits:
      # cpu: 200m
      memory: 1Gi

console:
  securityContext:
    capabilities:
      drop:
        - ALL
    readOnlyRootFilesystem: true
    runAsNonRoot: true
    runAsUser: 1000
  tolerations: []
  affinity: {}
  nodeSelector: {}
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      # cpu: 200m
      memory: 256Mi

Install the new version of Kannika Armory using Helm:

$ helm upgrade --install kannika oci://quay.io/kannika/charts/kannika \
    --create-namespace \
    --namespace kannika-system \
    --version 0.7.2 \
    --values values.yaml

For more installation options, see Installation.

Once you have completed the upgrade process, enable backups again by setting the .spec.enabled field to true.

#!/bin/sh
for backup_name in $(kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name' --no-headers)
do
  kubectl patch --type merge backup -n [NAMESPACE] "$backup_name" -p '{"spec":{"enabled": true}}'
done

Replace [NAMESPACE] with the namespace where the backups are located.

Verify that the upgrade was successful by checking the logs of the Kannika Armory components:

$ kubectl logs -n kannika-system -l app.kubernetes.io/name=kannika-operator
$ kubectl logs -n kannika-system -l app.kubernetes.io/name=kannika-api
$ kubectl logs -n kannika-system -l app.kubernetes.io/name=kannika-console

Verify that the backups are running as expected:

$ kubectl get backups -n [NAMESPACE]

If you encounter any issues during the upgrade process, do not hesitate to contact us on Slack.