Backup with S3 storage
This tutorial shows how to configure a Backup that stores data in S3-compatible storage using MinIO.
Challenge
Volume Storage is the simplest way to get started with backups, but it stores data on a local Kubernetes volume. This limits your options for offsite copies, cloud portability, and integration with existing object storage infrastructure.
Using S3-compatible storage gives you:
- Offsite backups that are independent of the Kubernetes cluster
- Cloud portability across AWS S3, MinIO, Ceph, and other S3-compatible services
- Scalable storage that grows with your data without managing volumes
- Integration with existing object storage infrastructure and tooling
Solution
Kannika Armory supports S3 Storage natively, including S3-compatible services like MinIO. By pointing a Storage resource at a MinIO endpoint, backups are written directly to an S3 bucket.
Prerequisites
- A Kannika Armory instance available, running on a Kubernetes environment.
- A local installation of the `kubectl` binary.
Refer to the Setup section to set up the lab environment.
Scenario
In this tutorial, you will simulate an e-commerce company that backs up order data to S3-compatible storage:
- Data: Order records from an e-commerce platform
- Storage: A MinIO instance acting as S3-compatible object storage
- Goal: Configure a Backup that streams Kafka topic data to a MinIO bucket
The setup script provisions MinIO alongside the Kafka cluster and creates the Kubernetes resources needed for the backup.
Run the setup script:
```shell
curl -fsSL https://raw.githubusercontent.com/kannika-io/armory-examples/main/install.sh | bash -s -- s3-minio-backup
```

Or clone the armory-examples repository:

```shell
git clone https://github.com/kannika-io/armory-examples.git
cd armory-examples
./setup s3-minio-backup
```

This sets up:

```
Kubernetes cluster: kannika-kind
├── Namespace: kannika-system
│   └── Kannika Armory
└── Namespace: kannika-data
    ├── EventHub: prod-kafka → kafka-source:9092
    ├── Storage: s3-minio → minio:9000/kannika-backup
    ├── Credentials: s3-minio-creds
    └── Backup: prod-backup

Kafka: kafka-source:9092 (localhost:9092)
└── Topic: orders

MinIO: minio:9000 (localhost:9000)
├── Console: localhost:9001
└── Bucket: kannika-backup
```

Step 1: Inspect the S3 Storage configuration
The Storage resource tells Kannika Armory where to write backup data.
For S3-compatible storages like MinIO, you need to set the `endpoint` and `forcePathStyle` fields.
```yaml
# https://github.com/kannika-io/armory-examples/blob/main/tutorials/s3-minio-backup/k8s/s3-minio.storage.yaml
apiVersion: kannika.io/v1alpha
kind: Storage
metadata:
  name: s3-minio
  namespace: kannika-data
spec:
  s3:
    bucket: kannika-backup
    endpoint: http://minio:9000
    forcePathStyle: true
```

The key fields for S3-compatible storages:
- `endpoint` points to the MinIO S3 API. Since MinIO runs on the same Docker network as the Kind cluster, pods can reach it at `minio:9000`.
- `forcePathStyle` must be `true` for MinIO. MinIO does not support virtual-hosted-style addressing (e.g. `bucket.minio:9000`), so the bucket name is included in the URL path instead.
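To illustrate the difference, here is how the same object would be addressed under each style (the object key is a placeholder):

```
Path-style:           http://minio:9000/kannika-backup/<object-key>
Virtual-hosted-style: http://kannika-backup.minio:9000/<object-key>
```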
See the S3 Storage reference for all configuration options.
Step 2: Inspect the Credentials
The Credentials resource provides authentication for the S3 storage. For MinIO, this uses the same AWS-style access key and secret key format.
```yaml
# https://github.com/kannika-io/armory-examples/blob/main/tutorials/s3-minio-backup/k8s/s3-minio.credentials.yaml
apiVersion: kannika.io/v1alpha
kind: Credentials
metadata:
  name: s3-minio-creds
  namespace: kannika-data
spec:
  aws:
    accessKeyIdFrom:
      secretKeyRef:
        name: s3-minio-creds
        key: accessKeyId
    secretAccessKeyFrom:
      secretKeyRef:
        name: s3-minio-creds
        key: secretAccessKey
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: s3-minio-creds
  namespace: kannika-data
data:
  accessKeyId: bWluaW9hZG1pbg== # minioadmin
  secretAccessKey: bWluaW9hZG1pbg== # minioadmin
```

The Credentials reference a Kubernetes Secret containing the MinIO access key and secret key. This is the same format used for AWS S3 authentication.
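The `data` values in the Secret are simply the base64-encoded default MinIO credentials. As a quick sanity check, you can reproduce (or replace) them with the standard `base64` tool:

```shell
# Encode a credential for a Secret's data field.
# The -n flag stops echo from appending a newline,
# which would otherwise change the encoded value.
echo -n 'minioadmin' | base64
# → bWluaW9hZG1pbg==
```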
See AWS Authentication for more details.
Step 3: Inspect the Backup
The Backup ties everything together. It reads data from the EventHub (Kafka) and writes it to the Storage (MinIO).
```yaml
# https://github.com/kannika-io/armory-examples/blob/main/tutorials/s3-minio-backup/k8s/prod-backup.backup.yaml
apiVersion: kannika.io/v1alpha
kind: Backup
metadata:
  name: prod-backup
  namespace: kannika-data
spec:
  source: prod-kafka
  sink: s3-minio
  sinkCredentialsFrom:
    credentialsRef:
      name: s3-minio-creds
  enabled: true
  streams:
    - topic: orders
```

The `sinkCredentialsFrom` field references the Credentials resource created in the previous step.
This tells the Backup to authenticate with MinIO using the provided access key and secret key.
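As a sketch, backing up more data is a matter of extending the `streams` list; the `payments` topic below is hypothetical and not part of this tutorial's setup:

```yaml
  streams:
    - topic: orders
    - topic: payments # hypothetical additional topic
```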
Step 4: Verify the Backup is streaming
Check the Backup status:

```shell
kubectl get backup prod-backup -n kannika-data
```

```
NAME          STATUS
prod-backup   Streaming
```

The `Streaming` status means the Backup is actively reading from Kafka and writing to MinIO.
Check the Storage status to confirm data is being written:
```shell
kubectl get storage s3-minio -n kannika-data
```

```
NAME       STATUS
s3-minio   Ready
```

Step 5: Browse the backup data in MinIO

Open the MinIO console in your browser:

```
http://localhost:9001
```

Log in with username `minioadmin` and password `minioadmin`.
Navigate to the `kannika-backup` bucket to see the backup data.
You should see a directory structure created by Kannika Armory
containing the backed-up records from the orders topic.
Alternatively, verify the bucket contents from the command line using the MinIO client:
```shell
docker run --rm --network minio --entrypoint /bin/sh quay.io/minio/mc -c \
  "mc alias set local http://minio:9000 minioadmin minioadmin > /dev/null && mc ls local/kannika-backup --recursive"
```

Cleanup

To remove all tutorial resources:

```shell
./teardown
```

Summary
In this tutorial, you learned how to:
- Configure an S3 Storage resource for MinIO with `endpoint` and `forcePathStyle`
- Set up AWS-style Credentials for MinIO authentication
- Create a Backup that streams Kafka data to S3-compatible storage
- Verify backup data in the MinIO console and via the command line
This same configuration works with any S3-compatible storage.
To use AWS S3 instead of MinIO, remove the `endpoint` and `forcePathStyle` fields and update the Credentials with your AWS access keys.
See the S3 Storage reference for details.
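Following the note above, a minimal sketch of the same Storage pointed at AWS S3 might look like this (the bucket name is hypothetical; consult the S3 Storage reference for region and other options):

```yaml
apiVersion: kannika.io/v1alpha
kind: Storage
metadata:
  name: s3-aws
  namespace: kannika-data
spec:
  s3:
    bucket: my-company-backups # hypothetical bucket
    # no endpoint or forcePathStyle: the defaults target AWS S3
```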