Using CNI Cilium VPC Native Routing
CNI (Container Network Interface) Cilium VPC Native Routing is a mechanism that lets Kubernetes manage networking without an overlay network. Instead of adding virtual network layers, Cilium VPC Native Routing leverages the native routing capabilities of the cloud provider's VPC (Virtual Private Cloud) to optimize data transfer between nodes and pods in the Kubernetes cluster.
On VKS, CNI (Container Network Interface) Cilium VPC Native Routing operates according to the following model:
In this model:
Each Node has a private IP address range for pods (Pod CIDR). Pods in each node use addresses from this CIDR and communicate over the virtual network.
Cilium and eBPF perform network management for all pods on each node, including handling traffic going from pod to pod, or from node to node. When necessary, eBPF performs masquerading to hide the internal IP address of the pod when communicating with the external network.
Cilium ensures that pods can communicate with each other both within the same node and between different nodes.
To initialize a Cluster and deploy a Workload, you need:
At least 1 VPC and 1 Subnet in the ACTIVE state. If you do not have a VPC or Subnet yet, create them by following the instructions below:
Step 1: Access the vServer homepage at the link https://hcm-3.console.vngcloud.vn/vserver
Step 2: Select the VPCs menu in the left menu of the screen.
Step 3: If you do not have any VPC yet, select Create VPC, enter the VPC name, and define the desired /16 CIDR range.
Step 4: Once you have at least 1 VPC, create a subnet by selecting View Detail to expand the control panel at the bottom, which includes the Subnet section.
Step 5: In the Subnet section, select Add Subnet. Now, you need to enter:
Subnet name: the subnet's mnemonic name
Primary CIDR: This is the primary IP address range of the subnet. All internal IP addresses of virtual machines (VMs) in this subnet are taken from this range. For example, if you set the Primary CIDR to 10.1.0.0/24, the VM IP addresses will fall in the range 10.1.0.1 to 10.1.0.254.
Secondary CIDR: This is a secondary IP address range, used to provide additional IP addresses or to separate different services within the same subnet. Each node is allocated a private IP address range for its pods (Pod CIDR); the pods on each node use addresses from this CIDR and communicate over the virtual network.
Attention:
The Primary CIDR and Secondary CIDR ranges must not overlap: neither range may contain any part of the other. For example, if the Primary CIDR is 10.1.0.0/24, the Secondary CIDR cannot be 10.1.0.0/20, because that range overlaps the Primary CIDR. Instead, use a non-overlapping range such as 10.1.16.0/20.
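If you want to double-check that two CIDR ranges are disjoint before entering them in the console, a quick sketch using Python's standard `ipaddress` module (an illustrative check, not part of the VKS console) looks like this:

```python
import ipaddress

# Primary CIDR of the subnet and two candidate secondary CIDRs
primary = ipaddress.ip_network("10.1.0.0/24")
bad_secondary = ipaddress.ip_network("10.1.0.0/20")    # contains 10.1.0.0/24
good_secondary = ipaddress.ip_network("10.1.16.0/20")  # disjoint from 10.1.0.0/24

# overlaps() is True when the two ranges share at least one address
print(primary.overlaps(bad_secondary))   # True  -> invalid combination
print(primary.overlaps(good_secondary))  # False -> valid combination
```

Any pair of ranges for which `overlaps()` returns `True` will be rejected, as in the 10.1.0.0/24 and 10.1.0.0/20 example above.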
At least 1 SSH key in the ACTIVE state. If you do not have an SSH key, create one by following the instructions here.
kubectl installed and configured on your device. Please refer here if you are not sure how to install and use kubectl. Also avoid outdated versions: we recommend a kubectl version that differs from the cluster version by no more than one minor version.
Attention:
When using Cilium's native routing mode, it is crucial to configure Security Groups correctly to allow necessary connections. For example, when running an NGINX pod on a node, you must permit traffic on port 80 to ensure requests from other nodes can connect. This configuration is not required when using the network overlay mode.
To initialize a Cluster, follow the steps below:
Step 1: Access https://vks.console.vngcloud.vn/overview
Step 2: On the Overview screen, select Activate.
Step 3: Wait until we successfully initialize your VKS account. After successful activation, select Create a Cluster.
Step 4: On the Cluster initialization screen, we have set up the information for the Cluster and a Default Node Group for you. To use CNI Cilium VPC Native Routing for your Cluster, select:
Network type: Cilium VPC Native Routing, with the other parameters as follows:
Suppose that when initializing the cluster, we choose:
VPC: 10.111.0.0/16
Subnet:
Primary IP Range: 10.111.0.0/24
Secondary IP Range: 10.111.160.0/20
Node CIDR mask size: selectable values range from /24 to /26.
Attention:
Only one network type: In a cluster, you can use only one of three network types: Calico Overlay, Cilium Overlay, or Cilium VPC Native Routing.
Multiple subnets for a cluster: VKS supports the use of multiple subnets for a cluster. This allows you to configure each node group in the cluster to be located on different subnets within the same VPC, helping to optimize resource allocation and network management.
Cilium VPC Native Routing and Secondary IP Range: When using Cilium VPC Native Routing for a cluster, you can use multiple Secondary IP Ranges. However, each Secondary IP Range can only be used by a single cluster. This avoids IP address conflicts and ensures consistent network management.
When there are not enough IP addresses in the Node CIDR range or Secondary IP range to create more nodes, the following happens:
If the Secondary IP range runs out of IP addresses, new nodes are still created and joined to the cluster, but they cannot be used. Pods scheduled on such a node will be stuck in the "ContainerCreating" state because no suitable node can be found for deployment. In this case, create a new node group with a Secondary IP range that is not used by any other cluster.
Step 5: Select Create Kubernetes cluster. Please wait a few minutes for us to initialize your Cluster; its status will be Creating.
Step 6: When the Cluster status is Active, you can view Cluster information and Node Group information by selecting the Cluster name in the Name column.
Below are instructions for deploying an nginx deployment and testing IP assignment for the pods deployed in your cluster.
Step 1: Access https://vks.console.vngcloud.vn/k8s-cluster
Step 2: The Cluster list is displayed. Select the Download icon and then Download Config File to download the kubeconfig file. This file grants full access to your Cluster.
Step 3: Rename this file to config and save it as ~/.kube/config (i.e., inside the ~/.kube directory).
Step 4: Perform a Cluster check via the following commands.
Run the following command to check the nodes:
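The command shown in the console screenshot is not reproduced here; the usual way to list nodes (assuming kubectl is using the kubeconfig from Step 3) is:

```shell
# List the cluster nodes; STATUS should be Ready for all of them
kubectl get nodes -o wide
```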
If the result is as below, it means your Cluster is successfully initialized with 3 nodes:
Continue by running the following command to check the pods deployed on your kube-system namespace:
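For example (pod names and suffixes will differ in your cluster):

```shell
# List system pods; the cilium-* DaemonSet pods should be Running on every node
kubectl get pods -n kube-system -o wide
```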
If the result is as below, it means that the pods supporting Cilium VPC Native have been running:
Step 5: Deploy nginx on the newly created cluster:
Initialize the nginx-deployment.yaml file with the following content:
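A minimal manifest consistent with the pod names shown later in this guide (nginx-app) might look like the following; the replica count and image tag are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
```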
Perform this deployment via command:
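Assuming the manifest above is saved as nginx-deployment.yaml:

```shell
# Create (or update) the Deployment in the current kubeconfig context
kubectl apply -f nginx-deployment.yaml
```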
Step 6: Check the deployed nginx pods and the IP address assigned to each pod.
Perform a check of the pods via the command:
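A typical command for this check is:

```shell
# -o wide adds the IP and NODE columns, showing each pod's assigned address
kubectl get pods -o wide
```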
As shown below, the nginx pods are assigned IPs of the form 10.111.16x.x, which satisfy the Secondary IP range and Node CIDR mask size settings specified above:
You can also describe each pod in detail to check its information via the command:
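Using the example pod name from this guide (replace it with a pod name from your own cluster):

```shell
# Shows the pod's IP, node placement, events, and container status
kubectl describe pod nginx-app-7c79c4bf97-6v88s
```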
Step 7: There are a few checks you can run to verify Cilium thoroughly. Specifically:
First, you need to install Cilium CLI following the instructions here .
After installing the Cilium CLI, check the status of Cilium in your cluster via the command:
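The standard status command from the Cilium CLI is:

```shell
# --wait blocks until Cilium reports readiness (or a timeout is reached)
cilium status --wait
```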
If the result is displayed as below, Cilium is fully operational:
Step 8: You can perform a health check on Cilium in your cluster.
Run the following command to perform a health check:
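One way to run the health check is via the cilium-health agent bundled in the Cilium DaemonSet pods (a sketch; your Cilium installation may expose this differently):

```shell
# Query node-to-node and endpoint health from inside a Cilium agent pod
kubectl -n kube-system exec ds/cilium -- cilium-health status
```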
Reference results
Additionally, you can also perform additional End-to-End connectivity tests or Network performance tests following the instructions at End-To-End Connectivity Testing or Network Performance Test .
Step 9: Check the connection between Pods.
Perform a connectivity test between pods, ensuring that the pods can communicate via their VPC IP addresses without going through an overlay network. For example, below we ping from the pod nginx-app-7c79c4bf97-6v88s (IP address 10.111.161.53) to a server in the same VPC with IP address 10.111.0.10:
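A sketch of such a test, using the example pod and target IP above:

```shell
# Ping a VPC address from inside the pod (assumes ping is available in the
# container image; the stock nginx image may need iputils-ping installed first)
kubectl exec -it nginx-app-7c79c4bf97-6v88s -- ping -c 4 10.111.0.10
```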
If the result is as follows, the connection is successful:
| Field | Meaning | Illustrative example |
|---|---|---|
| VPC | The IP address range that the Cluster nodes use to communicate. | In the picture, we choose a VPC with IP range 10.111.0.0/16, corresponding to 65,536 IPs. |
| Subnet | A smaller IP address range belonging to the VPC. Each node in the Cluster is assigned an IP from this Subnet. The Subnet must be within the IP range of the selected VPC. | In the picture, we choose a Subnet with a Primary IP range of 10.111.0.0/24, corresponding to 256 IPs. |
| Default Pod IP range | The secondary IP address range used for pods. It is called the Secondary IP range because it is separate from the node's primary IP range. Pods in the Cluster are assigned IPs from this range. | In the picture, we choose a Secondary IP range of 10.111.160.0/20, corresponding to 4,096 IPs for pods. |
| Node CIDR mask size | The CIDR size for nodes. This parameter determines how many IP addresses each node is allocated from the pod IP range. Choose a size that provides enough IP addresses for all pods on each node. You can refer to the table below to see how to calculate the number of IPs available for nodes and pods in your cluster. | In the picture, we choose a Node CIDR mask size of /25, so each node has 128 IP addresses, suitable for the number of pods you want to run on a node. |
| Node CIDR mask size | Number of IPs per node | Number of nodes that can be created in the /20 range (4096 IPs) | Number of IPs allocated to pods on each node | Actual number of pods that can be created |
|---|---|---|---|---|
| /24 | 256 | 16 | 256 | 128 |
| /25 | 128 | 32 | 128 | 64 |
| /26 | 64 | 64 | 64 | 32 |
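The figures in the table follow directly from the mask sizes. A quick sketch of the arithmetic (assuming, per the table, that roughly half of each node's range is actually usable for pods):

```python
# Total IPs in a /20 secondary range
secondary_ips = 2 ** (32 - 20)  # 4096

for mask in (24, 25, 26):
    ips_per_node = 2 ** (32 - mask)          # IPs allocated to each node
    nodes = secondary_ips // ips_per_node    # nodes that fit in the /20 range
    usable_pods = ips_per_node // 2          # ~half the range is usable for pods
    print(f"/{mask}: {ips_per_node} IPs/node, {nodes} nodes, ~{usable_pods} pods/node")
```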