

How is a stretched Kubernetes cluster used in the telecommunications industry? (VMware TCP 5G)

  • Writer: Kiruba Karan
  • Sep 28, 2023
  • 3 min read

Updated: May 17, 2024

Let's simplify and focus only on Kubernetes infrastructure. Consider a virtual machine running on VMware ESXi that serves as a Kubernetes worker node, enabling container deployments in a radio access network (RAN).


In this setup, the control-plane nodes of the Kubernetes cluster are located in the Regional Data Center (RDC), serving as the management layer, while the worker nodes are placed at another endpoint (a separate vCenter) within the RDC.


Kubernetes Control Plane Components


1. Kube-apiserver - The API server exposes the Kubernetes API and acts as the front end and primary interface for the Kubernetes control plane.

2. Etcd - A consistent and highly available key-value store that serves as the backing store for all cluster data in Kubernetes.

3. Controller manager - Runs the control loops that watch the cluster's state and make changes to move the current state towards the desired state.

4. Scheduler - This control-plane component watches for newly created Pods that have no assigned node and selects a node for them to run on.

5. Kube-vip - Provides a virtual IP that fronts the control-plane API. Here it is used only between the control-plane nodes, not as an external ingress/load balancer for other CNFs (a manifest sketch follows this list).
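For reference, kube-vip for the control plane is typically run as a static pod on each control-plane node. Here is a minimal sketch of such a manifest, assuming a layer-2 (ARP) setup; the interface name, virtual IP, and image tag are placeholders for values that would come from your environment.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: kube-vip
      image: ghcr.io/kube-vip/kube-vip:v0.6.4   # placeholder tag
      args: ["manager"]
      env:
        - name: vip_interface        # NIC that carries the control-plane VIP
          value: ens192
        - name: address              # virtual IP fronting kube-apiserver
          value: "10.10.10.100"
        - name: port
          value: "6443"
        - name: vip_arp              # advertise the VIP via ARP (layer 2)
          value: "true"
        - name: cp_enable            # load-balance the control plane...
          value: "true"
        - name: svc_enable           # ...but do not act as a Service LB for CNFs
          value: "false"
        - name: vip_leaderelection   # only one control-plane node holds the VIP at a time
          value: "true"
      securityContext:
        capabilities:
          add: ["NET_ADMIN", "NET_RAW"]
      volumeMounts:
        - name: kubeconfig
          mountPath: /etc/kubernetes/admin.conf
  volumes:
    - name: kubeconfig
      hostPath:
        path: /etc/kubernetes/admin.conf
```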


Worker Node Components


1. Kubelet - An agent that runs on each node in the cluster and ensures that the containers within a Pod are running properly.

2. Kube-proxy - A network proxy that runs on each node and maintains network rules. These rules allow network communication to your Pods from sessions inside or outside the cluster.

3. Container runtime - The software responsible for running containers. Kubernetes supports container runtimes such as containerd and CRI-O.
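Worker nodes hosting RAN workloads are usually tuned at the kubelet level as well. The following KubeletConfiguration snippet is a hedged sketch, not a TCP default: the static CPU manager and single-NUMA topology policy shown are common choices for latency-sensitive DU pods, and the reserved CPU range is a placeholder.

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Pin guaranteed-QoS DU containers to dedicated cores.
cpuManagerPolicy: static
# Align CPU, memory, and devices (e.g. SR-IOV VFs) on one NUMA node.
topologyManagerPolicy: single-numa-node
# Cores kept back for the OS and the kubelet itself (placeholder range).
reservedSystemCPUs: "0-1"
```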


Breaking down an implementation (refer to the image).

  1. In vCenter-1, the Kubernetes control-plane components are deployed. This includes essential components such as the kube-apiserver, etcd, controller manager, scheduler, and kube-vip, along with other required add-ons.

  2. The control-plane nodes must be deployed in an odd number (typically 1, 3, or 5) to be able to maintain quorum; predominantly, these are deployed as 3 nodes (see the cluster sketch after this list).

  3. At vCenter-2, the worker nodes are distributed across regional endpoints while remaining under the same vCenter. Each node runs the distributed unit (DU) as a pod.

  4. Worker node sizing is generally based on the number of Pods per node; in the telco case, however, sizing is typically driven by the requirements recommended by the CNF (Cloud-Native Network Function) vendor.

  5. When a service provider wants to deploy a container for a telco workload (a DU, for example), the request is submitted to the kube-apiserver. The scheduler then determines a valid node for each pod in the scheduling queue according to constraints and available resources.

  6. Deploying the DU pods to the appropriate worker node in a market relies on a nodeSelector constraint in the deployment YAML file. This constraint ensures that the DU pods land on the correct worker node in the specific market (see the deployment sketch after this list).

  7. There are other scheduling options, such as taints and tolerations; in general, though, node labels and nodeSelectors are by far the most common way to control placement (that same sketch includes a toleration as well).

  8. Once the deployment succeeds, the DU pods are up and running, and with the necessary add-ons and essential components in place, the site is ready to come alive. This final step turns the infrastructure into a fully functional site capable of delivering the desired service.
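To ground steps 1 and 2: VMware TCP typically provisions Tanzu Kubernetes clusters through Telco Cloud Automation, and a cluster definition with a three-node control plane might look roughly like the sketch below. The names, VM classes, and version are placeholders, not values from an actual deployment.

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: ran-stretched-cluster          # placeholder name
  namespace: telco
spec:
  distribution:
    version: v1.23                     # placeholder Kubernetes version
  topology:
    controlPlane:
      count: 3                         # odd number to maintain etcd quorum
      class: best-effort-medium        # placeholder VM class
      storageClass: vsan-default-storage-policy
    workers:
      count: 4                         # sized per the CNF vendor's recommendations
      class: guaranteed-large          # placeholder VM class
      storageClass: vsan-default-storage-policy
```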
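And to illustrate steps 5 through 7, here is a minimal sketch of a DU deployment pinned to a market's worker nodes with a nodeSelector and an optional toleration. The label key, taint, image, and resource numbers are all hypothetical; a real manifest would follow the CNF vendor's specification.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: du-dallas-01                   # hypothetical DU instance name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: du
      market: dallas
  template:
    metadata:
      labels:
        app: du
        market: dallas
    spec:
      nodeSelector:
        telco.example.com/market: dallas    # placeholder label on the market's worker node
      tolerations:
        - key: telco.example.com/ran        # optional: tolerate a RAN-dedicated taint
          operator: Exists
          effect: NoSchedule
      containers:
        - name: du
          image: registry.example.com/vendor/du:1.0   # placeholder CNF image
          resources:
            requests:
              cpu: "8"                 # requests == limits gives guaranteed QoS,
              memory: 16Gi             # so the static CPU manager pins cores
            limits:
              cpu: "8"
              memory: 16Gi
```

Beforehand, the matching label would be applied to the target node, e.g. kubectl label node <node-name> telco.example.com/market=dallas (again, a hypothetical label key).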


