Announcing the 0.13.0 Release
This release contains a new backup metrics endpoint, improved audit logging, Azure IAM support, and more.
Installation
For new installations, see the Installation guide.
For upgrading existing installations, see the associated Upgrading to 0.13.x guide.
New backup metrics endpoint
A new REST endpoint is available to retrieve metrics for a specific backup.
The endpoint is available at /rest/backups/{backup}/metrics.
Initial calls to the metrics endpoint may take a few seconds to complete, as metric collectors are spun up per Backup Pod when metrics are requested.
Example output:
{
  "topics": [
    {
      "name": "my-topic",
      "status": "RUNNING",          # ENABLED, CREATED, RUNNING, PAUSED, BACKOFF, FAILED
      "status_message": null,       # Additional information about the status of the topic, such as error messages.
      "ingestion_rate": {
        "bytes_per_second": 1234    # Bytes per second ingested for this topic.
      },
      "message_count": 324018,      # Total number of messages in the backup for this topic.
      "compressed_size": 2994527,   # Total size of storage in bytes for this topic (including compression if applicable).
      "uncompressed_size": 9299006, # Total size of all messages in bytes (incl. header/body/key).
      "progression": {
        "backup_rate": 1,           # The increase in offsets per second that are backed up for this topic.
        "produce_rate": 10,         # The increase in offsets per second caused by the producer.
        "catch_up_time_seconds": 0  # Estimate of time to catch up to the producer.
      },
      "offset_lag": 0,              # The total amount of lag between the producer and the backup, for all partitions.
      "partitions": [
        {
          "number": 0,              # Partitions are sorted by number, allowing for easy comparison.
          "sink_offset": 324017,    # The offset of the last message that was backed up. This will typically be one less than the consumer offset.
          "consumer_offset": 324018, # The next offset that will be consumed.
          "max_offset": 324018      # The high watermark of the partition.
        },
        {
          "number": 1,
          "sink_offset": 223122,
          "consumer_offset": 223123,
          "max_offset": 223123
        }
      ]
    }
  ]
}
Support for custom security secrets
The Helm chart now supports custom security secrets for the API. This provides more flexibility and control over where the API’s credentials are stored and managed. This can be particularly useful in environments where strict security policies require secrets to be stored in a specific location or managed by a specific tool, such as an external secret manager.
api:
  config:
    security:
      enabled: true
      secret:
        create: false                # Disable default secret creation
        name: my-custom-secret       # Optional, defaults to release name
        # usernameKey: username      # Optional, defaults to `username`
        # passwordKey: password      # Optional, defaults to `password`
        # namespace: kannika-system  # Optional, defaults to release namespace
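When default secret creation is disabled as above, the referenced secret must already exist in the cluster. As a minimal sketch, assuming the default username and password key names shown above, such a secret could look like this (the credential values are placeholders):

# Sketch of a user-managed API credentials secret.
# The name and key names match the defaults from the values above;
# adjust them if you override name, usernameKey or passwordKey.
apiVersion: v1
kind: Secret
metadata:
  name: my-custom-secret   # Matches `secret.name` above; lives in the release namespace unless `namespace` is set
type: Opaque
stringData:
  username: armory-admin   # Placeholder; key name must match `usernameKey` (default `username`)
  password: change-me      # Placeholder; key name must match `passwordKey` (default `password`)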
Improved audit logging
The API server has been updated to provide consistent user attribution for all GraphQL API interactions. The requester’s identity is now correctly captured and preserved throughout the entire lifecycle of a query, including the resolution of complex graph structures.
The diagnostic context key has been updated from user_id to user.
When security is disabled, the user is now logged as <anonymous>.
Propagate labels and annotations
Resources that generate child resources can now propagate labels and annotations from the parent. This ensures consistent metadata for filtering and monitoring across your entire infrastructure.
Metadata propagation
Propagation of metadata labels and annotations is opt-in to avoid breaking existing deployments. You must specify which metadata to propagate using control annotations:
- io.kannika/propagate-labels - controls label propagation
- io.kannika/propagate-annotations - controls annotation propagation
Spec propagation
Labels and annotations can now be defined in the labels and annotations fields in the spec of resources.
These are always propagated since they represent explicit user intent,
and always take precedence over metadata values from the parent.
Example usage
apiVersion: kannika.io/v1alpha
kind: Backup
metadata:
  name: kafka-backup
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9000"
    prometheus.io/path: "/metrics"
    # Explicitly opt-in to propagate specific metadata
    io.kannika/propagate-labels: "environment, team"
    io.kannika/propagate-annotations: "prometheus.io/scrape, prometheus.io/port, prometheus.io/path"
  labels:
    environment: production
    team: platform
spec:
  source: my-eventhub
  sink: my-storage
  # Spec labels and annotations are always propagated and take precedence over metadata values
  labels:
    component: backup-logic
  annotations:
    monitoring: enabled
Wildcard propagation
You can also use a wildcard * to propagate all metadata labels or annotations in a single step.
This ensures all custom metadata is inherited by child resources without listing keys individually.
Note that reserved prefixes such as io.kannika/ and kubectl.kubernetes.io/ are blocked from propagation to avoid conflicts.
The labels pod-template-hash and controller-revision-hash are also blocked to prevent issues with Kubernetes controllers.
apiVersion: kannika.io/v1alpha
kind: Backup
metadata:
  name: kafka-backup-wildcard
  annotations:
    custom.monitoring/type: "latency"
    # Propagate all valid custom labels and annotations
    io.kannika/propagate-labels: "*"
    io.kannika/propagate-annotations: "*"
  labels:
    environment: production
    team: platform
    # These labels will NOT propagate because they use a reserved prefix
    io.kannika/internal: "blocked"
    kubectl.kubernetes.io/last-applied-configuration: "..."
spec:
  source: my-eventhub
  sink: my-storage
Azure IAM
Kannika Armory now supports Azure IAM authentication for Azure Blob Storage using workload identities. This lets you authenticate to Azure Blob Storage with an Azure Managed Identity, without storing explicit credentials in the configuration.
To use workload identities, you need to create a Service Account in your Kubernetes cluster that is linked to an Azure Managed Identity. This Service Account can then be referenced in the storage configuration to enable IAM authentication.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  annotations:
    azure.workload.identity/client-id: "..."

Then, reference the Service Account in your resource configuration
and enable workload identity usage by propagating the azure.workload.identity/use: true label to the Pod.
apiVersion: kannika.io/v1alpha
kind: Backup
metadata:
  name: azure-blob-backup
  annotations:
    io.kannika/propagate-labels: "azure.workload.identity/use"
  labels:
    azure.workload.identity/use: "true"
spec:
  source: my-eventhub
  sink: storage-azure-blob
  # Reference the Service Account for workload identity
  serviceAccountName: my-service-account
  # Alternatively, specify the label directly in the spec instead of metadata
  labels:
    azure.workload.identity/use: "true"
Helm charts repository
The source code of our Helm charts is now available from https://github.com/kannika-io/helm-charts. A copy of the Helm charts will be made available in the repository with each release. Development and maintenance of these charts will still be done internally.
In addition to the Helm charts’ source,
we also provide an aggregate values.yaml file with all possible configuration options for the umbrella chart.
This file can be used as a starting point for customizing the Helm chart values.
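As an illustration, a customized values file can start from that aggregate file and override only the options you need. The sketch below reuses the api.config.security keys from the example earlier in this post; the full set of options is listed in the aggregate values.yaml itself.

# Hypothetical trimmed-down values.yaml for the umbrella chart.
# Only the api.config.security keys shown earlier in this post are used here;
# consult the aggregate values.yaml in the helm-charts repository for all available options.
api:
  config:
    security:
      enabled: true
      secret:
        create: false
        name: my-custom-secret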
Bugfixes
- API: Fixed a bug in the REST API when configuring a restore with a time window. The validation logic incorrectly flagged the combination of restore_from_date_time and restore_until_date_time as invalid, which resulted in a "must be after restore_from_date_time" error. This behaviour has been corrected, and both parameters can now be used together to define a precise restore window via the REST API.
- API: Fixed the OpenAPI specification to contain all REST endpoints. Specifically, the GET /rest/backups endpoint was not included in the specification.
- Console: Fixed the wrong metric being shown for the restored messages on the Restore page.
Release notes
For a full list of changes, see the Changelog.