Welcome back to the Azure Kubernetes Chronicles! In previous episodes, we explored network-related topics like container network interfaces, eBPF, and observability in Azure Kubernetes Service (AKS). But securing an AKS cluster goes beyond networking and runtime security — it extends to protecting, recovering, and managing data.

In this episode, we’re shifting gears to focus on a topic that’s equally critical but often underestimated: data protection in AKS. While Kubernetes brings agility and scalability, it also introduces complexities in securing persistent storage, managing sensitive secrets, and ensuring compliance with evolving regulations like NIS2 and DORA.

Whether you’re dealing with accidental data loss, ransomware threats, or compliance challenges, having a robust backup, recovery, and encryption strategy is non-negotiable. Misconfigurations, untested restore procedures, and reliance on default security settings can quickly turn an operational hiccup into a full-blown crisis.

· Why Data Protection Matters in Azure Kubernetes
· Common Pitfalls in AKS Data Protection
· NIS2 and DORA: What They Mean for Data Protection in AKS
· Solutions: Backup and Restore Strategies for AKS
· Conclusion

Why Data Protection Matters in Azure Kubernetes

Data breaches have become an all-too-common occurrence, with incidents reported daily around the globe. Anyone can fall victim, regardless of size or industry, and the critical question has shifted from “if” a data breach will occur to “when” it will happen. As we navigate this reality, understanding the implications and taking proactive measures is essential for safeguarding valuable information.

Kubernetes workloads often rely heavily on persistent storage, databases, and secrets management. If these components aren’t properly secured or backed up, attackers have a clear path to valuable data.

And let’s not forget compliance. If your organization handles regulated information — think healthcare, finance, or personal data under GDPR — then solid data protection practices aren’t optional; they’re mandatory. Misconfigured AKS clusters can inadvertently expose sensitive data, risking not just reputation damage but legal and financial penalties, too.

Common Pitfalls in AKS Data Protection

A big misconception around Azure Kubernetes Service (AKS) is that it automatically covers all data protection bases. Spoiler: it doesn’t. AKS has some great built-in security features, but leaving things on default can quickly backfire. Misconfigured persistent storage, skipping encryption, or neglecting proper RBAC (Role-Based Access Control) setups are all easy ways to accidentally expose your cluster’s data.

Another pitfall is mishandling of Kubernetes secrets. Kubernetes secrets are base64 encoded — not encrypted — by default. It’s way too common to see sensitive info like API keys or passwords sitting openly in ConfigMaps or static environment variables, often without proper rotation policies. This kind of shortcut leaves your systems wide open if anyone breaches your cluster.
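It takes one command to see why base64 offers no protection. A minimal illustration on any machine with `base64` available (the secret value is made up):

```shell
# Base64 is a reversible encoding, not encryption: no key is needed.
# This mimics the .data field of a Kubernetes Secret object.
encoded=$(printf 'SuperSecretPassword' | base64)
echo "stored form: $encoded"

# Anyone who can read the Secret object can recover the plaintext:
printf '%s' "$encoded" | base64 --decode
```

The decode step prints the original password straight back, which is exactly why sensitive values belong in a vault rather than in ConfigMaps or plain Secrets.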

Backup strategies also frequently miss the mark. Many teams trust cloud-provider snapshots without realizing these might not be application-aware or reliable enough for critical data. Without a tested and validated restore plan, backups become a false sense of security, and recovery can turn into chaos exactly when you need calm.
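When snapshots alone are not application-aware, Velero’s backup hooks can quiesce an application around the snapshot by running a command inside a container before and after the backup. A hedged sketch using the pre/post hook annotations (the pod, data path, and fsfreeze approach are illustrative; fsfreeze requires a privileged container):

```yaml
# Hypothetical database pod: Velero runs the pre-hook before backing up
# the pod's volumes and the post-hook afterwards, so the snapshot is
# taken while writes to the data directory are frozen.
apiVersion: v1
kind: Pod
metadata:
  name: postgres
  annotations:
    pre.hook.backup.velero.io/container: postgres
    pre.hook.backup.velero.io/command: '["/sbin/fsfreeze", "--freeze", "/var/lib/postgresql/data"]'
    post.hook.backup.velero.io/container: postgres
    post.hook.backup.velero.io/command: '["/sbin/fsfreeze", "--unfreeze", "/var/lib/postgresql/data"]'
spec:
  containers:
    - name: postgres
      image: postgres:16
```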


NIS2 and DORA: What They Mean for Data Protection in AKS

Regulations aren’t just legal hurdles — they’re here to make sure we don’t wake up one day to find our critical data lost or stolen. In the world of Kubernetes, particularly in Azure Kubernetes Service (AKS), data protection is more than just ticking compliance boxes; it’s about making sure your workloads are resilient, secure, and recoverable.

Two key regulations are shaping how organizations approach data security in the EU: NIS2 (Network and Information Security Directive 2) and DORA (Digital Operational Resilience Act). While they target different industries, they both push for stronger cybersecurity, backup strategies, and disaster recovery plans — which are must-haves for anyone running Kubernetes in production.

What NIS2 Means for Your AKS Setup

You need to have solid backup and recovery plans. If something goes wrong — be it a cyberattack, a misconfiguration, or accidental deletion — you must restore your workloads quickly.

Incident response and reporting are a must. If an attack or outage happens, NIS2 requires that it be reported promptly.

Data security (encryption and access control) needs to be rock solid. NIS2 mandates the proper protection of sensitive data, meaning no more storing secrets in plain text.
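On AKS, one concrete control for this is encrypting Kubernetes secrets at rest in etcd with a customer-managed key through the Azure Key Vault KMS integration. A sketch reusing the cluster and vault names from later in this post (the key name is made up, and the exact flag set may vary by CLI version):

```shell
# Create a key that will wrap the data-encryption keys for etcd
az keyvault key create --vault-name BlogKeyVault --name etcd-kms-key

# Enable KMS etcd encryption on the cluster
KEY_ID=$(az keyvault key show --vault-name BlogKeyVault \
  --name etcd-kms-key --query 'key.kid' -o tsv)
az aks update -n aks-blog-cluster -g aks-blog-rg \
  --enable-azure-keyvault-kms \
  --azure-keyvault-kms-key-id "$KEY_ID"
```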

What DORA Means for AKS in Financial Services

You can’t just “hope” your backups work — you need to test them. DORA mandates regular testing of disaster recovery plans to prove that your backups are useful.

Third-party cloud risks need to be managed. Financial institutions using cloud services must ensure that their providers meet resilience and security requirements.

Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO) matter. DORA forces organizations to define how quickly they can recover from a failure (RTO) and how much data loss is acceptable (RPO).

NIS2 and DORA aren’t just another set of bureaucratic rules — they’re a wake-up call to take data protection seriously. Whether you’re managing cloud services under NIS2 or financial workloads under DORA, your AKS backup and recovery strategy is critical.

Solutions: Backup and Restore Strategies for AKS

When you’re protecting your data in AKS, having a solid backup and restore strategy is essential. Start with a Kubernetes-native tool like Velero, which is built specifically to work smoothly with Kubernetes clusters. Velero automates and manages backup operations, including snapshots of cluster state, resource definitions, and persistent volume data. It makes it easy to run regular backups on a custom schedule and even to integrate hooks for database consistency.
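Scheduled backups can be expressed declaratively as a Velero Schedule resource. A minimal sketch (the schedule name, namespace, and retention are illustrative):

```yaml
# velero-schedule.yaml -- nightly backup of the 'production' namespace,
# kept for 7 days
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: nightly-production
  namespace: velero
spec:
  schedule: "0 2 * * *"   # cron: every night at 02:00
  template:
    includedNamespaces:
      - production
    ttl: 168h0m0s         # retain each backup for 7 days
```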

If you need more enterprise-level features and management options, a solution like Veeam Kasten K10 is a good choice. These tools offer advanced capabilities like policy-driven backup automation, incremental backups, application-aware snapshots, and intuitive dashboards that simplify management and reporting. They also handle backups across multiple clusters and clouds, making them ideal for complex environments.

These backup tools cover all the bases by backing up both cluster resources and persistent volumes, ensuring comprehensive protection. Plus, they simplify the entire restore and disaster recovery process, making it easier when things inevitably go sideways. With features like point-in-time recovery and granular restores, you can recover exactly what you need, quickly and accurately.

First, we look at how Velero is installed on an Azure Kubernetes Cluster. After that, we take a quick look at Veeam Kasten.

Install Velero

Create an Azure storage account to store the backups from Velero

# Create Storage Account
az storage account create \
  --name blogvelerobackup \
  --resource-group aks-blog-rg \
  --sku Standard_LRS \
  --kind StorageV2 \
  --access-tier Hot \
  --location westeurope \
  --default-action Allow


Velero consists of a client part, the Velero CLI, and a server part that runs in the cluster. The CLI is used to manage the Velero server installation. We start by installing the Velero CLI.

# for macOS (Homebrew)
brew install velero

# direct binary download
VELERO_VERSION=v1.13.2
wget https://github.com/vmware-tanzu/velero/releases/download/$VELERO_VERSION/velero-$VELERO_VERSION-darwin-amd64.tar.gz
tar -xvf velero-$VELERO_VERSION-darwin-amd64.tar.gz
sudo mv velero-$VELERO_VERSION-darwin-amd64/velero /usr/local/bin/


Velero uses a file to authenticate with the storage account. We have to put the Azure Storage Account Key in a credential file.

az storage account keys list \
   --resource-group aks-blog-rg \
   --account-name blogvelerobackup \
   --query '[0].value' -o tsv
vim credentials-velero

Paste the following content into the file

AZURE_STORAGE_ACCOUNT_ACCESS_KEY=<YOUR_STORAGE_ACCOUNT_ACCESS_KEY>

For safety, restrict the permissions on the credentials-velero file

chmod 600 credentials-velero

Install Velero using the following script

# install-velero.sh
#!/bin/bash

RESOURCE_GROUP=aks-blog-rg
AZURE_STORAGE_ACCOUNT=blogvelerobackup
BLOB_CONTAINER=velero

# Create storage container (using Azure AD login)
az storage container create \
  --name $BLOB_CONTAINER \
  --account-name $AZURE_STORAGE_ACCOUNT \
  --auth-mode login

# Install Velero with Azure plugin
velero install \
  --provider azure \
  --plugins velero/velero-plugin-for-microsoft-azure:v1.7.0 \
  --bucket $BLOB_CONTAINER \
  --secret-file ./credentials-velero \
  --backup-location-config resourceGroup=$RESOURCE_GROUP,storageAccount=$AZURE_STORAGE_ACCOUNT \
  --snapshot-location-config resourceGroup=$RESOURCE_GROUP,subscriptionId=$(az account show --query id -o tsv)

Check the Velero installation

kubectl get pods -n velero

To perform a backup, run the following command

velero backup create <BACKUP_NAME> --include-namespaces <NAMESPACE>

To restore from the backup use

velero restore create --from-backup <BACKUP_NAME>

Install Veeam Kasten


kubectl create namespace kasten-io
helm repo add kasten https://charts.kasten.io/
helm repo update
helm install k10 kasten/k10 --namespace kasten-io

Access the Kasten Dashboard via port forwarding

kubectl --namespace kasten-io port-forward service/gateway 8080:8000

By using port forwarding, we can open the Kasten Dashboard on http://localhost:8080.

In the dashboard, you can configure policies to create backups for the applications in the Azure Kubernetes cluster. The dashboard offers a lot of different options to configure the disaster recovery scenarios.

But backups alone aren’t enough. Encryption and secure secret management are just as important. Solutions like Azure Key Vault or HashiCorp Vault can safely store your secrets, and both integrate with Azure Kubernetes Service. In addition to a vault solution, configure Role-Based Access Control (RBAC): make sure the right permissions are assigned and review them regularly.
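For RBAC, least privilege means scoping who can read Secret objects at all. A minimal sketch reusing the names from this post (the Role and binding names are illustrative):

```yaml
# Allow reading only the one secret the app actually needs,
# instead of a blanket 'get secrets' permission
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-my-secret
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["my-secret"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-my-secret-binding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: akv-service-account
    namespace: default
roleRef:
  kind: Role
  name: read-my-secret
  apiGroup: rbac.authorization.k8s.io
```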

Injecting secrets from Azure Key Vault into Azure Kubernetes Service (AKS) workloads

  • Set up Azure Key Vault and store a secret
  • Configure AKS to access Key Vault secrets
  • Deploy an application to AKS that uses the injected secret

Set up Azure Key Vault

# Create a Key Vault
az keyvault create --name BlogKeyVault --resource-group aks-blog-rg --location westeurope
# Create role assignment
az role assignment create \
  --role "Key Vault Secrets Officer" \
  --assignee "<YOUR_USER_OR_SP_OBJECT_ID>" \
  --scope "/subscriptions/c9465047-a812-42e7-a53b-739940940898/resourceGroups/aks-blog-rg/providers/Microsoft.KeyVault/vaults/BlogKeyVault"
# Add a secret
az keyvault secret set --vault-name BlogKeyVault --name MySecret --value "SuperSecretPassword"

Connect Azure Kubernetes Service with Azure Key Vault

We will use Azure Workload Identity, a secure and recommended way to authenticate pods to Azure resources.

Enable workload identity on your AKS cluster

az aks update -n aks-blog-cluster -g aks-blog-rg --enable-oidc-issuer --enable-workload-identity

Create an Azure Managed Identity


# Create the managed identity
az identity create -n AKSIdentity -g aks-blog-rg

IDENTITY_CLIENT_ID=$(az identity show -n AKSIdentity -g aks-blog-rg --query clientId -o tsv)
IDENTITY_OBJECT_ID=$(az identity show -n AKSIdentity -g aks-blog-rg --query principalId -o tsv)

az role assignment create \
--role "Key Vault Secrets User" \
--assignee-object-id "$IDENTITY_OBJECT_ID" \
--assignee-principal-type ServicePrincipal \
--scope "$(az keyvault show --name BlogKeyVault --query id -o tsv)"

Create a Kubernetes service account and link it to the Azure identity


# aks-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: akv-service-account
  namespace: default
  annotations:
    azure.workload.identity/client-id: "<IDENTITY_CLIENT_ID>"

Apply the service account

kubectl apply -f aks-sa.yaml

Federated Credential setup

We have enabled workload identity in AKS, so we now configure a federated identity credential for the previously created managed identity (AKSIdentity).


# Set variables
AKS_OIDC_ISSUER="$(az aks show -n aks-blog-cluster -g aks-blog-rg --query "oidcIssuerProfile.issuerUrl" -o tsv)"
IDENTITY_NAME="AKSIdentity"
NAMESPACE="default"
SERVICE_ACCOUNT_NAME="akv-service-account"
RESOURCE_GROUP="aks-blog-rg"
SUBSCRIPTION_ID="$(az account show --query id -o tsv)"


# Create federated identity credential
az identity federated-credential create \
  --name aks-keyvault-fic \
  --identity-name "$IDENTITY_NAME" \
  --resource-group "$RESOURCE_GROUP" \
  --issuer "$AKS_OIDC_ISSUER" \
  --subject "system:serviceaccount:$NAMESPACE:$SERVICE_ACCOUNT_NAME" \
  --audiences "api://AzureADTokenExchange"

Injecting Secrets into Your Application

We will use the Azure Key Vault Secrets Provider CSI Driver. To be able to use it, we have to enable it in AKS.

az aks enable-addons --addons azure-keyvault-secrets-provider --name aks-blog-cluster --resource-group aks-blog-rg

Create a deployment YAML that injects your secret


apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: akv-service-account
      containers:
        - name: my-app-container
          image: nginx
          env:
            - name: SECRET_FROM_KEYVAULT
              valueFrom:
                secretKeyRef:
                  name: my-secret
                  key: MySecret
          volumeMounts:
            - name: secrets-store-inline
              mountPath: "/mnt/secrets-store"
              readOnly: true
      volumes:
        - name: secrets-store-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "azure-kvname"
---
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kvname
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    keyvaultName: BlogKeyVault
    tenantId: "983d915c-b881-4803-bf11-a81f8e51e3bc"
    clientID: "b5e1174d-952a-44d3-8651-476a83c2babe"
    objects: |
      array:
        - |
          objectName: MySecret
          objectType: secret
  secretObjects:
    - secretName: my-secret
      type: Opaque
      data:
        - objectName: MySecret
          key: MySecret

Apply the deployment

kubectl apply -f deployment.yaml

Check the name of your pod with kubectl get pods -n default. Exec into your container.

kubectl exec -it <POD_NAME> -- printenv SECRET_FROM_KEYVAULT

You should see the following output

SuperSecretPassword

Another key aspect that is often overlooked is monitoring. Backups aren’t the most exciting thing in the world, so it’s easy to set them up and just assume they’ll work when needed. Keep an eye on your backup performance and health. Azure Monitor is one of the tools that can do just that, giving you detailed insights into what’s happening behind the scenes and highlighting anomalies, performance issues, or backups that aren’t completing as expected.
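The backup objects themselves also expose their health: every Velero backup records a status phase that a script or scheduled job can check. A minimal sketch (the kubectl query is shown in a comment; here the phase is stubbed so the alerting logic is visible):

```shell
#!/bin/bash
# In a real check, read the phase of the most recent backup, e.g.:
#   phase=$(kubectl get backups.velero.io -n velero \
#     --sort-by=.metadata.creationTimestamp \
#     -o jsonpath='{.items[-1].status.phase}')
phase="Completed"   # stubbed value for illustration

if [ "$phase" = "Completed" ]; then
  echo "backup OK"
else
  echo "backup ALERT: phase=$phase"
  exit 1
fi
```

Wire the non-zero exit code into your alerting of choice so a failed or partially failed backup surfaces immediately instead of at restore time.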

And, of course, don’t underestimate the importance of testing your restore procedures regularly. It is easy to assume everything is working fine. But there is no worse feeling than confidently heading into a disaster recovery scenario, only to discover the backups were silently failing the whole time. Schedule periodic test restores, make sure the process is smooth, and confirm you can get your data back quickly and reliably. That extra bit of testing can make all the difference when disaster strikes.

Conclusion

Data protection in Azure Kubernetes Service (AKS) isn’t just a compliance checkbox — it’s a critical component of running resilient, secure, and recoverable workloads. Misconfigurations, overlooked backups, and poor secret management can turn small mistakes into major incidents.
By implementing a solid backup and restore strategy with tools like Velero or Veeam Kasten, securing secrets with Azure Key Vault or HashiCorp Vault, and aligning with regulations like NIS2 and DORA, you can ensure your AKS clusters remain protected against both operational failures and security threats.

But don’t stop at implementation — regularly test your restore processes, monitor backup health, and refine your disaster recovery plan. The worst time to find out your backups aren’t working is when you need them the most.

Backups are like seatbelts: you hope you never need them, but you’ll be grateful they’re there when disaster strikes.

Stay tuned for the next episode of Azure Kubernetes Chronicles! 🚀

Want to know more about what we do?

We are your dedicated partner. Reach out to us.