Upgrading to 0.12.x
This guide explains how to upgrade from 0.11.x to 0.12.x.
Before you begin
Before you begin the upgrade process, ensure that you have a backup of your data.
The upgrade process will not touch the actual backup data, but might change the resource definitions and remove the Deployments of backups, the operator, and other components.
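If you want a snapshot of the current resource definitions before you start, you can export them with kubectl; the output file name below is just an example:
$ kubectl get backups,schemaregistrybackups --all-namespaces -o yaml > kannika-resources-0.11.yaml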
Checklist
Follow the steps in this checklist to upgrade to 0.12.x.
Pause backups
Pause all backups to prevent any changes to the backup data by setting the .spec.enabled field to false.
#!/usr/bin/sh
for backup_name in $(kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name' --no-headers)
do
  kubectl patch --type merge backup -n [NAMESPACE] "$backup_name" -p '{"spec":{"enabled": false}}'
done
#!/usr/bin/fish
for backup_name in (kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name' --no-headers)
  kubectl patch --type merge backup -n [NAMESPACE] $backup_name -p '{"spec":{"enabled": false}}'
end
Replace [NAMESPACE] with the namespace where the backups are located.
Pause existing operator
Pause the existing operator by setting the .spec.replicas field to 0.
$ kubectl scale deployment kannika-operator -n kannika-system --replicas=0
This will ensure no changes are made to the existing resources while the upgrade is in progress.
If you are using a Helm chart to manage the operator, you can set the operator.replicaCount field to 0 in the Helm values file:
operator:
  replicaCount: 0
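Either way, you can check that the operator Deployment has scaled down to zero before continuing:
$ kubectl get deployment kannika-operator -n kannika-system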
Upgrade CRDs
Install the new Custom Resource Definitions (CRDs) for 0.12.x:
Using kubectl
$ kubectl apply -f https://docs.kannika.io/refs/0.12.0/crd/kannika-crd-v1alpha.yml
Using Helm
$ helm install kannika-crd oci://quay.io/kannika/charts/kannika-crd \
    --version 0.12.0
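To confirm the CRDs are in place, you can list them; this assumes the Kannika CRD names contain "kannika":
$ kubectl get crds | grep -i kannika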
Update API integration
If you are integrating with the API directly, we recommend updating your integrations with the /restores endpoints. The key field of a restore called id has been renamed to name to align with the other resources. The id field is still supported, but it is deprecated and will be removed in an upcoming release.
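For example, a restore that was previously keyed by id is now keyed by name. The payloads below are only an illustrative sketch: the restore name my-restore is a placeholder and all other fields are omitted.
Before (0.11.x), deprecated but still accepted:
{ "id": "my-restore" }
After (0.12.x):
{ "name": "my-restore" }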
Install application with updated Helm values
This version of Kannika Armory introduces some important changes to the platform regarding Kubernetes namespaces and RBAC resources.
Configure RBAC resources
To improve the security of the cluster where Armory runs, the new version of Armory defaults to Role and RoleBinding resources, instead of cluster-wide ClusterRole and ClusterRoleBinding resources.
Note that these are not mutually exclusive, and you can enable both namespaced and cluster-wide RBAC resources at the same time.
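If you are not sure whether your current installation relies on the cluster-wide resources, one way to check is to list them; the label selector below assumes the chart applies the standard Helm instance label and that the release is named kannika:
$ kubectl get clusterroles,clusterrolebindings -l app.kubernetes.io/instance=kannika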
Enabling namespaced RBAC
If you wish to enable or disable the namespaced RBAC resources, use the following <component>.serviceAccount.rbac.create fields:
api:
  serviceAccount:
    rbac:
      create: true

operator:
  serviceAccount:
    rbac:
      create: true
Roles will be created in the resource namespace and in the system namespace.
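After the upgrade, you can verify that the Roles and RoleBindings exist in both namespaces; kannika-data is the example resource namespace used later in this guide:
$ kubectl get roles,rolebindings -n kannika-system
$ kubectl get roles,rolebindings -n kannika-data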
Enabling cluster-wide RBAC
If you wish to continue using cluster-wide RBAC resources, set the following <component>.serviceAccount.clusterRbac.create fields to true:
api:
  serviceAccount:
    clusterRbac:
      create: true

operator:
  serviceAccount:
    clusterRbac:
      create: true
Configure resource namespace
To properly support namespaced RBAC resources, the new version of Armory now allows you to specify the namespace that the whole platform will watch for all components, using the global.kubernetes.namespace key. If global.kubernetes.namespace is not set, it defaults to the release namespace (e.g. kannika-system).
api:
  config:
    kubernetes:
      namespace: kannika-data

global:
  kubernetes:
    namespace: kannika-data
The configured namespace must be created separately before installing the application.
$ kubectl create namespace kannika-data
Install the new version of Kannika Armory
Install the new version of Kannika Armory using Helm:
$ helm upgrade --install kannika oci://quay.io/kannika/charts/kannika \
    --create-namespace \
    --namespace kannika-system \
    --version 0.12.0 \
    --values values.yaml
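You can then wait for the operator Deployment to finish rolling out; the Deployment name is the same one that was scaled down earlier:
$ kubectl rollout status deployment/kannika-operator -n kannika-system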
Enable backups again
Once you have completed the upgrade process, enable backups again by setting the .spec.enabled field to true.
#!/usr/bin/sh
for backup_name in $(kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name' --no-headers)
do
  kubectl patch --type merge backup -n [NAMESPACE] "$backup_name" -p '{"spec":{"enabled": true}}'
done
#!/usr/bin/fish
for backup_name in (kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name' --no-headers)
  kubectl patch --type merge backup -n [NAMESPACE] $backup_name -p '{"spec":{"enabled": true}}'
end
Replace [NAMESPACE] with the namespace where the backups are located.
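To confirm the flag was applied, you can print the enabled state of each backup:
$ kubectl get backups -n [NAMESPACE] -o custom-columns='NAME:.metadata.name,ENABLED:.spec.enabled'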
Verify the installation
Verify that the upgrade was successful by checking the logs of the Kannika Armory components:
$ kubectl logs -n kannika-system -l app.kubernetes.io/name=kannika-operator   # or operator
$ kubectl logs -n kannika-system -l app.kubernetes.io/name=kannika-api        # or api
$ kubectl logs -n kannika-system -l app.kubernetes.io/name=kannika-console    # or console
Verify that the backups are running as expected:
$ kubectl get backups -n [NAMESPACE]
$ kubectl get schemaregistrybackups -n [NAMESPACE]
If you encounter any issues during the upgrade process, do not hesitate to contact us on Slack.