Azure Kubernetes Chronicles: Container Network Interfaces
In today’s cloud landscape, Kubernetes has become one of the backbones of modern application development, empowering businesses to deploy, scale, and manage containerized applications effortlessly. But with great power comes great complexity! That’s where Azure Kubernetes Service (AKS) comes into play — a managed Kubernetes service designed to make your container orchestration journey smoother and more productive.
This blog series, Azure Kubernetes Chronicles, is written for platform engineers, CloudOps professionals, and cloud architects who are beginning their Kubernetes journey or looking to streamline their operations. It guides you through the essentials and beyond as we dive into some of the tools, features, and best practices for leveraging Azure Kubernetes Service to its fullest potential. In this first part, the focus is on the Container Network Interface (CNI).
· What is a CNI?
· CNI Plugins
· Azure Kubernetes Service and CNI
· Setting Up and Testing Your CNI Configuration
· Conclusion
What is a CNI?
The CNI (Container Network Interface) is a cloud-native standard for managing container networking. It provides a specification and framework for configuring network interfaces in Linux containers, ensuring that containers can communicate with each other, services within the cluster, and external systems.
CNI is not specific to Kubernetes — it is a general-purpose solution — but it has become a cornerstone of Kubernetes networking. Kubernetes relies on the CNI standard to abstract network setup and management.
In Kubernetes, each Pod (the smallest deployable unit) operates in its own network namespace, requiring:
- An IP address to communicate with other pods and services.
- Routing rules to enable East-West (intra-cluster) and North-South (external) traffic.
The CNI standard ensures that these networking needs are consistently met across different infrastructure providers and environments.
How the CNI framework integrates with Kubernetes:
The CNI Specification:
- The CNI defines a simple contract between the container runtime (e.g., Docker, containerd) and a network plugin.
- It uses two primary operations:
- Add: Attach a network interface to a container when it is created.
- Delete: Remove the interface when the container is destroyed.
The Kubernetes Networking Model:
Kubernetes imposes certain networking requirements, and CNI plugins help satisfy them:
- Pod-to-Pod Communication: Every pod should be able to reach every other pod without NAT.
- Pod-to-Service Communication: Pods must be able to connect to Kubernetes services.
- Cluster External Access: Pods and services must be accessible from outside the cluster.
Interaction with kubelet
- When Kubernetes creates a pod, the kubelet invokes the CNI plugin to configure networking.
- The plugin handles IP allocation, DNS configuration, and routing setup.
How CNIs Handle Key Networking Aspects
IP Address Management:
CNIs assign IP addresses to pods using either of the following (a configuration sketch follows this list):
- Host-local methods (local IP pools).
- IPAM plugins (IP Address Management systems).
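To make this concrete, here is roughly what a CNI network configuration (a "conflist") looks like. This is a generic, illustrative example using the reference bridge plugin with host-local IPAM, not the configuration AKS ships; the container runtime reads files like this from /etc/cni/net.d and hands them to the plugin on the Add and Delete operations.
# Write an illustrative conflist to a local file for inspection; it is not applied anywhere.
cat <<'EOF' > 10-demo.conflist
{
  "cniVersion": "1.0.0",
  "name": "demo-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
EOF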
Routing:
CNIs configure routing tables to ensure traffic flows correctly between pods, nodes, and external systems.
Network Policies
- Kubernetes Network Policies define rules for pod communication.
- CNI plugins (e.g., Calico, Cilium) enforce these policies at runtime.
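As an illustration of the second point, the manifest below is a standard Kubernetes NetworkPolicy that denies all ingress traffic to pods in the default namespace. The policy object itself is plain Kubernetes API; it only takes effect when the cluster runs a policy-capable CNI such as Calico or Cilium. The policy name is arbitrary.
# Deny all ingress traffic to pods in the default namespace.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
EOF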
Service Discovery and DNS
- CNIs integrate with Kubernetes CoreDNS to handle service discovery for pods.
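A quick way to see this service discovery in action is to resolve a service name from a throwaway pod; the pod name and busybox image tag below are arbitrary choices.
# Resolve the built-in kubernetes service through the cluster DNS (CoreDNS on AKS).
kubectl run dns-check --image=busybox:1.36 --rm -it --restart=Never -- nslookup kubernetes.default.svc.cluster.local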
CNI Plugins
Different CNI plugins are available, catering to various use cases. Here’s a look at some popular options:
Calico
Type: Layer 3 networking and network policy engine.
Features:
- Supports advanced network security policies.
- Can operate in both overlay and non-overlay modes.
- Integrates with eBPF for performance optimization.
Use Case: Enterprises requiring fine-grained network policies and scalability.
Flannel
Type: Simple overlay network.
Features:
- Lightweight and easy to set up.
- Uses VXLAN for encapsulation.
- Minimalist compared to other CNIs.
Use Case: Small-to-medium-sized clusters with straightforward networking needs.
Cilium
Type: Layer 3 networking with eBPF.
Features:
- High observability and security.
- Granular traffic control using eBPF.
- Advanced load balancing and service mesh integrations.
Use Case: Modern microservices architectures prioritising performance and security.
Azure Kubernetes Service and CNI
Microsoft offers several CNI options for Azure Kubernetes Service: Azure CNI, Azure CNI Overlay, Azure CNI Powered by Cilium, and BYOCNI (Bring Your Own CNI), alongside the older Kubenet option. In this post, we’ll explore Azure CNI, Azure CNI Overlay, and Kubenet. We will dive deeper into Azure CNI Powered by Cilium in our next post.
Azure CNI
Azure CNI (Container Networking Interface) is the default networking option for AKS. It provides a seamless integration with Azure Virtual Network (VNet), ensuring each pod gets its own IP address from the subnet associated with the AKS cluster.
Key Features
- Full VNet Integration: Pods are directly assigned IPs from the Azure VNet, allowing native communication with other Azure resources.
- Security and Compliance: Azure policies and NSGs (Network Security Groups) can be applied at the pod level for fine-grained control.
- Scalability: Ideal for workloads requiring high throughput and low latency.
Use Cases
- Enterprises with stringent compliance or security requirements.
- Workloads that need direct integration with other Azure services like Azure SQL Database or Storage Accounts.
- Scenarios requiring large-scale, high-performance applications.
Challenges
- IP Exhaustion: Each pod consumes an IP address from the VNet, which can lead to IP exhaustion in large clusters.
- Complexity in Subnet Management: Requires careful planning of subnet sizes, especially in high-density environments.
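To see what this VNet integration and subnet planning look like in practice, here is a sketch that creates a VNet and subnet and binds an AKS cluster to it with Azure CNI. Resource names and address ranges are hypothetical, and flag spellings can vary slightly between Azure CLI versions, so treat it as a starting point rather than a copy-paste recipe.
# Resource group and VNet with a subnet sized for both nodes and pods (every pod consumes a subnet IP).
az group create --name aks-net-rg --location westeurope
az network vnet create \
  --resource-group aks-net-rg \
  --name aks-vnet \
  --address-prefixes 10.0.0.0/8 \
  --subnet-name aks-subnet \
  --subnet-prefixes 10.240.0.0/16

# Look up the subnet ID and create the cluster with Azure CNI bound to that subnet.
SUBNET_ID=$(az network vnet subnet show \
  --resource-group aks-net-rg \
  --vnet-name aks-vnet \
  --name aks-subnet \
  --query id -o tsv)

az aks create \
  --resource-group aks-net-rg \
  --name aks-cni-cluster \
  --network-plugin azure \
  --vnet-subnet-id "$SUBNET_ID" \
  --generate-ssh-keys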
Azure CNI Overlay
Azure CNI Overlay is a newer addition designed to address the limitations of Azure CNI, particularly around IP exhaustion. Instead of assigning each pod an IP directly from the VNet, it uses an overlay network to assign pod IPs.
Key Features
- Overlay Networking: Pods are assigned IPs from a different address space (an internal overlay network), conserving VNet IP addresses.
- Efficient Resource Utilization: Supports larger cluster sizes without requiring extensive subnet planning.
- High Performance: Optimized for low latency and high throughput workloads.
Use Cases
- Scenarios where subnet IP exhaustion is a concern.
- High-density workloads with a need for more pods per node.
- Teams looking for simplified IP address management.
Challenges
- Overlay Overhead: Introduces slight overhead due to encapsulation, which may marginally affect network latency.
- Limited Adoption: As a newer option, it may require additional testing for niche use cases.
Restrictions
- You can’t use Application Gateway as an Ingress Controller (AGIC) for an Overlay cluster.
- You can’t use Application Gateway for Containers for an Overlay cluster.
- Virtual Machine Availability Sets (VMAS) aren’t supported for Overlay.
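Creating an Overlay cluster is a small variation on the Azure CNI setup: you add the overlay plugin mode and, optionally, a pod CIDR that is independent of the VNet address space. The resource group, cluster name, and CIDR below are illustrative.
# Azure CNI in overlay mode: node IPs come from the VNet, pod IPs from the separate pod CIDR.
az aks create \
  --resource-group my-aks-rg \
  --name aks-overlay-cluster \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16 \
  --generate-ssh-keys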
Kubenet
Kubenet is a basic CNI option that relies on Kubernetes’ built-in network components. It configures pod networking using NAT (Network Address Translation) and route tables.
Key Features
- Simple Architecture: Minimal configuration and dependencies.
- Low IP Consumption: Pods communicate using NAT, which doesn’t require assigning individual IPs from the VNet.
- Cost Efficiency: Suitable for smaller clusters and test environments.
Use Cases
- Development or testing environments with limited networking requirements.
- Scenarios with small-scale, low-performance workloads.
Challenges
- Limited Integration: No direct integration with Azure services, as pods don’t get their own VNet IPs.
- Manual Route Management: Requires explicit route table configuration for pod communication.
- Unlike Azure CNI clusters, multiple kubenet clusters can’t share a subnet.
- AKS doesn’t apply Network Security Groups (NSGs) to its subnet and doesn’t modify any of the NSGs associated with that subnet.
- Scalability Constraints: Less suitable for large-scale or complex applications.
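For completeness, a kubenet-based cluster is created by selecting the kubenet plugin; the sketch below uses an illustrative resource group, cluster name, and pod CIDR.
# Kubenet: pods get IPs from a logically separate CIDR and reach the VNet via NAT and route tables.
az aks create \
  --resource-group my-aks-rg \
  --name aks-kubenet-cluster \
  --network-plugin kubenet \
  --pod-cidr 10.244.0.0/16 \
  --generate-ssh-keys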
Comparison of CNI Options
- Azure CNI: pod IPs are assigned directly from the VNet subnet, giving full integration with Azure services and NSGs; best for enterprise and high-performance workloads, but large clusters can exhaust subnet IPs.
- Azure CNI Overlay: pod IPs come from a separate overlay address space, conserving VNet IPs and supporting high-density clusters; adds slight encapsulation overhead and a few feature restrictions.
- Kubenet: pod traffic is NATed via route tables, keeping IP consumption and setup minimal; limited Azure integration and scalability make it best suited to dev/test and small clusters.
Key Considerations When Choosing a CNI
When selecting a CNI for your AKS cluster, consider the following factors:
- Cluster Size and Density: Azure CNI Overlay is a better fit for high-density clusters, while Azure CNI suits mid-sized clusters with integration needs.
- Integration with Azure Resources: If direct communication with Azure services is critical, Azure CNI is the preferred option.
- IP Management: Azure CNI Overlay is ideal for scenarios where IP exhaustion is a concern.
- Performance Requirements: For workloads requiring high throughput and low latency, Azure CNI and Azure CNI Overlay are better suited than Kubenet.
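If you are unsure which CNI an existing cluster uses, its network profile will tell you. The command below is a sketch with illustrative names; the exact fields returned depend on your Azure CLI version.
# Inspect the cluster's network configuration (plugin, plugin mode, pod/service CIDRs).
az aks show \
  --resource-group my-aks-rg \
  --name my-aks-cluster \
  --query networkProfile \
  -o json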
Setting Up and Testing Your CNI Configuration
Introduction to the Demo
In this section, we’ll walk through a practical demonstration of setting up an AKS cluster with Azure CNI and testing its networking configuration. This hands-on approach will help strengthen your understanding of CNIs and their integration with Kubernetes on Azure.
Create the Resource Group
az group create --name AKS-blog-rg --location westeurope
Create the cluster
Use the --network-plugin azure parameter to create the cluster with Azure CNI networking configured. After deployment, get the credentials.
az aks create \
--resource-group AKS-blog-rg \
--name aks-cluster \
--network-plugin azure \
--generate-ssh-keys
az aks get-credentials --resource-group AKS-blog-rg --name aks-cluster
Now that we have downloaded the credentials, we can check the health status of the nodes, pods, and services that are installed:
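A few standard kubectl checks cover the basics:
# Nodes should report Ready.
kubectl get nodes -o wide
# System pods (CoreDNS, kube-proxy, Azure CNI components) should be Running.
kubectl get pods --all-namespaces
# Cluster services, including the built-in kubernetes API service.
kubectl get services --all-namespaces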
Deploy a Sample Application
Deploy a simple application in your cluster for testing. Use a utility pod that has basic networking tools such as wget pre-installed, and verify that the pod is running.
kubectl run busybox --image=busybox --restart=Never --command -- sleep 3600
kubectl get pods -n default
Test Pod-to-Pod Communication
To verify that pods can communicate with each other:
Create an nginx deployment and expose it as a service on port 80
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --target-port=80 --name=nginx-service --type=ClusterIP
Testing with wget
Retrieve the ClusterIP of the service
kubectl get svc nginx
Note the IP address of the service (e.g. 10.0.0.150).
Use wget inside the busybox pod to send a request. Replace <ClusterIP> with the address from the previous step. You should see the default NGINX welcome page HTML content.
kubectl exec -it busybox -- sh
wget -qO- http://<ClusterIP>:80
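You can also reach the service by its DNS name instead of the ClusterIP, which exercises CoreDNS-based service discovery at the same time:
# Still inside the busybox shell; <service>.<namespace>.svc.cluster.local is the standard service DNS name.
wget -qO- http://nginx-service.default.svc.cluster.local
# Leave the busybox shell before continuing.
exit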
Validate External Access
To validate that your application is accessible externally:
Deploy a sample nginx application if not already done.
kubectl create deployment nginx --image=nginx
Expose the deployment using a LoadBalancer service. If you created the ClusterIP service named nginx-service earlier, delete it first with kubectl delete service nginx-service so the name is available.
kubectl expose deployment nginx --port=80 --target-port=80 --name=nginx-service --type=LoadBalancer
Retrieve the external IP assigned by the LoadBalancer
kubectl get service nginx-service
Access the application in your browser or using wget
wget http://<external-ip> -q -O -
Replace <external-ip> with the external IP obtained in the previous step. You should see the application’s response, indicating that external access is working correctly.
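When you are done experimenting, clean up the demo resources to avoid unnecessary charges:
# Deletes the cluster and everything else in the resource group created for this demo.
az group delete --name AKS-blog-rg --yes --no-wait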
Conclusion
Container Network Interfaces (CNIs) form the backbone of networking in Kubernetes clusters, ensuring seamless communication within and beyond the cluster. Whether you prioritize performance, scalability, or simplicity, Azure Kubernetes Service offers a range of CNI options tailored to meet diverse workload needs.
In the next episode of the Azure Kubernetes Chronicles, we’ll explore the power of eBPF in relation to networking. Stay tuned!
Want to know more about what we do?
We are your dedicated partner. Reach out to us.