
Backup with S3 storage

This tutorial shows how to configure a Backup that stores data in S3-compatible storage using MinIO.

Volume Storage is the simplest way to get started with backups, but it stores data on a local Kubernetes volume. This limits your options for offsite copies, cloud portability, and integration with existing object storage infrastructure.

Using S3-compatible storage gives you:

  • Offsite backups that are independent of the Kubernetes cluster
  • Cloud portability across AWS S3, MinIO, Ceph, and other S3-compatible services
  • Scalable storage that grows with your data without managing volumes
  • Integration with existing object storage infrastructure and tooling

Kannika Armory supports S3 Storage natively, including S3-compatible services like MinIO. Point a Storage resource at a MinIO endpoint, and backups are written directly to an S3 bucket.

Before you begin, you need:

  • A Kannika Armory instance running in a Kubernetes environment.
  • A local installation of the kubectl binary.

Refer to the Setup section to set up the lab environment.

In this tutorial, you will simulate an e-commerce company that backs up order data to S3-compatible storage:

  • Data: Order records from an e-commerce platform
  • Storage: A MinIO instance acting as S3-compatible object storage
  • Goal: Configure a Backup that streams Kafka topic data to a MinIO bucket

The setup script provisions MinIO alongside the Kafka cluster and creates the Kubernetes resources needed for the backup.

Run the setup script:

curl -fsSL https://raw.githubusercontent.com/kannika-io/armory-examples/main/install.sh | bash -s -- s3-minio-backup

Or clone the armory-examples repository:

git clone https://github.com/kannika-io/armory-examples.git
cd armory-examples
./setup s3-minio-backup

This sets up:

Kubernetes cluster: kannika-kind
├── Namespace: kannika-system
│   └── Kannika Armory
└── Namespace: kannika-data
    ├── EventHub: prod-kafka → kafka-source:9092
    ├── Storage: s3-minio → minio:9000/kannika-backup
    ├── Credentials: s3-minio-creds
    └── Backup: prod-backup
Kafka: kafka-source:9092 (localhost:9092)
└── Topic: orders
MinIO: minio:9000 (localhost:9000)
├── Console: localhost:9001
└── Bucket: kannika-backup

Step 1: Inspect the S3 Storage configuration


The Storage resource tells Kannika Armory where to write backup data. For S3-compatible storages like MinIO, you need to set the endpoint and forcePathStyle fields.

# https://github.com/kannika-io/armory-examples/blob/main/tutorials/s3-minio-backup/k8s/s3-minio.storage.yaml
apiVersion: kannika.io/v1alpha
kind: Storage
metadata:
  name: s3-minio
  namespace: kannika-data
spec:
  s3:
    bucket: kannika-backup
    endpoint: http://minio:9000
    forcePathStyle: true

The key fields for S3-compatible storages:

  • endpoint points to the MinIO S3 API. Since MinIO runs on the same Docker network as the Kind cluster, pods can reach it at minio:9000.
  • forcePathStyle must be true for MinIO. MinIO does not support virtual-hosted-style addressing (e.g. bucket.minio:9000), so the bucket name is included in the URL path instead.
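To make the addressing difference concrete, here is a small sketch of the two URL styles. The object key below is hypothetical and for illustration only; the actual layout in the bucket is created by Kannika Armory.

```shell
bucket="kannika-backup"
endpoint="minio:9000"
key="orders/0/00000000.data"   # hypothetical object key, for illustration only

# Path-style: the bucket is part of the URL path
# (this is what forcePathStyle: true produces)
echo "http://${endpoint}/${bucket}/${key}"

# Virtual-hosted-style: the bucket becomes a subdomain
# (the AWS default, not supported by MinIO)
echo "http://${bucket}.${endpoint}/${key}"
```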

See the S3 Storage reference for all configuration options.

Step 2: Inspect the Credentials configuration

The Credentials resource provides authentication for the S3 storage. For MinIO, this uses the same AWS-style access key and secret key format.

# https://github.com/kannika-io/armory-examples/blob/main/tutorials/s3-minio-backup/k8s/s3-minio.credentials.yaml
apiVersion: kannika.io/v1alpha
kind: Credentials
metadata:
  name: s3-minio-creds
  namespace: kannika-data
spec:
  aws:
    accessKeyIdFrom:
      secretKeyRef:
        name: s3-minio-creds
        key: accessKeyId
    secretAccessKeyFrom:
      secretKeyRef:
        name: s3-minio-creds
        key: secretAccessKey
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: s3-minio-creds
  namespace: kannika-data
data:
  accessKeyId: bWluaW9hZG1pbg== # minioadmin
  secretAccessKey: bWluaW9hZG1pbg== # minioadmin

The Credentials reference a Kubernetes Secret containing the MinIO access key and secret key. This is the same format used for AWS S3 authentication.
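The values in the Secret's data field are base64-encoded, as Kubernetes requires. You can reproduce (or replace) them from the shell:

```shell
# Encode a credential for the Secret's data field.
# -n matters: a trailing newline would corrupt the credential.
echo -n 'minioadmin' | base64    # bWluaW9hZG1pbg==

# Decode to double-check what an existing Secret contains
echo 'bWluaW9hZG1pbg==' | base64 --decode    # minioadmin
```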

See AWS Authentication for more details.

Step 3: Inspect the Backup configuration

The Backup ties everything together. It reads data from the EventHub (Kafka) and writes it to the Storage (MinIO).

# https://github.com/kannika-io/armory-examples/blob/main/tutorials/s3-minio-backup/k8s/prod-backup.backup.yaml
apiVersion: kannika.io/v1alpha
kind: Backup
metadata:
  name: prod-backup
  namespace: kannika-data
spec:
  source: prod-kafka
  sink: s3-minio
  sinkCredentialsFrom:
    credentialsRef:
      name: s3-minio-creds
  enabled: true
  streams:
    - topic: orders

The sinkCredentialsFrom field references the Credentials resource created in the previous step. This tells the Backup to authenticate with MinIO using the provided access key and secret key.

Step 4: Verify the backup

Check the Backup status:

kubectl get backup prod-backup -n kannika-data

NAME          STATUS
prod-backup   Streaming

The Streaming status means the Backup is actively reading from Kafka and writing to MinIO.

Check the Storage status to confirm data is being written:

kubectl get storage s3-minio -n kannika-data

NAME       STATUS
s3-minio   Ready

Open the MinIO console in your browser:

http://localhost:9001

Log in with username minioadmin and password minioadmin.

Navigate to the kannika-backup bucket to see the backup data. You should see a directory structure created by Kannika Armory containing the backed-up records from the orders topic.

Alternatively, verify the bucket contents from the command line using the MinIO client:

docker run --rm --network minio --entrypoint /bin/sh quay.io/minio/mc -c \
"mc alias set local http://minio:9000 minioadmin minioadmin > /dev/null && mc ls local/kannika-backup --recursive"

Cleanup

To remove all tutorial resources:

./teardown

In this tutorial, you learned how to:

  1. Configure an S3 Storage resource for MinIO with endpoint and forcePathStyle
  2. Set up AWS-style Credentials for MinIO authentication
  3. Create a Backup that streams Kafka data to S3-compatible storage
  4. Verify backup data in the MinIO console and via the command line

This same configuration works with any S3-compatible storage. To use AWS S3 instead of MinIO, remove the endpoint and forcePathStyle fields and update the Credentials with your AWS access keys. See the S3 Storage reference for details.
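As a sketch, a Storage targeting AWS S3 could look like the following. The bucket name is a placeholder, and only the fields mentioned above are shown; check the S3 Storage reference for the authoritative field list.

```yaml
apiVersion: kannika.io/v1alpha
kind: Storage
metadata:
  name: s3-aws
  namespace: kannika-data
spec:
  s3:
    bucket: my-company-backups   # placeholder; your AWS bucket name
    # endpoint and forcePathStyle are omitted: AWS S3 is the default
    # target and uses virtual-hosted-style addressing.
```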