Exploring The Key Processes Running In (Kubernetes) K8s Cluster
SID Global Solutions
20 March 2023
Kubernetes, also known as K8s, is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It is an important tool for DevOps teams as it allows for efficient management of containerized applications in a cluster environment. In this guide, we will explore the key processes running in a Kubernetes cluster and how they work together to ensure the smooth operation of applications.
Before we dive into the key processes, it’s important to understand the overall architecture of a Kubernetes cluster. A Kubernetes cluster consists of a control plane (historically called the master node) and worker nodes. The control plane is responsible for managing the cluster, while the worker nodes run the containers that make up the application. Each node can run multiple pods, and each pod can contain one or more containers.
The Kubernetes API Server
The Kubernetes API Server is the central component of the Kubernetes cluster. It exposes the Kubernetes API, which is used by the other components to communicate with each other. The API server also serves as the gateway for external users to interact with the Kubernetes cluster.
The Kubernetes API Server is responsible for the following processes:
- Authenticating and authorizing requests from users and components.
- Validating and processing requests.
- Updating the Kubernetes object store with the current state of the cluster.
The Kubernetes Object Store: The Kubernetes object store is a database that stores the state of the cluster. It contains a record of all the objects in the cluster, including pods, services, and replication controllers. The object store is used by the Kubernetes API Server and other components to keep track of the current state of the cluster.
The Kubernetes Scheduler: The Kubernetes Scheduler is responsible for scheduling pods to run on worker nodes. It takes into account the resource requirements of the pods and the available resources on the worker nodes when making scheduling decisions.
The Kubernetes Controller Manager: The Kubernetes Controller Manager is responsible for managing the controllers in the Kubernetes cluster. Controllers are responsible for maintaining the desired state of objects in the cluster. For example, the Replication Controller ensures that a specified number of pod replicas are running in the cluster.
The Kubernetes Etcd Store: etcd is the distributed key-value store that backs the Kubernetes object store, persisting the current state of the cluster. It is highly available and fault-tolerant, and the Kubernetes API Server is the only component that reads from and writes to it directly.
The Kubernetes kubelet: The Kubernetes kubelet is responsible for running containers on worker nodes. It communicates with the Kubernetes API Server to get the current state of the cluster and ensures that the containers are running as expected.
The Kubernetes kube-proxy: The Kubernetes kube-proxy runs on every node and manages network traffic to and between pods in the cluster. It programs packet-forwarding rules on the node (typically via iptables or IPVS) to route traffic from service addresses to the backing pods, and it also plays a role in exposing services outside the cluster, for example through NodePorts.
Kubernetes Pods: A Kubernetes pod is the smallest unit of deployment in a Kubernetes cluster. It contains one or more containers and is scheduled to run on a worker node in the cluster. Pods are ephemeral and can be created, updated, or destroyed as needed.
When a pod is created, the kubelet on the worker node pulls the container image from a container registry and starts the container. The kubelet also sets up networking for the pod, including assigning it an IP address and creating a network namespace for the pod.
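The flow above starts from a pod manifest submitted to the API server. A minimal sketch looks like this (the name, image, and resource figures are illustrative placeholders):

```yaml
# Minimal pod manifest; the kubelet on the chosen node pulls the image
# and starts the container.
apiVersion: v1
kind: Pod
metadata:
  name: my-app               # hypothetical name
spec:
  containers:
    - name: my-app
      image: nginx:1.23      # pulled from a container registry
      resources:
        requests:            # used by the scheduler when placing the pod
          cpu: 100m
          memory: 128Mi
```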
Kubernetes Services: A Kubernetes service is a way to expose a set of pods as a network service. Services allow pods to communicate with each other and with the outside world. Services can be exposed internally within the cluster or externally to the internet.
When a service is created, kube-proxy on each node programs forwarding rules that route traffic addressed to the service’s virtual IP to the pods associated with the service, distributing it roughly evenly across them.
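As a sketch, a service selects its backing pods by label and maps a service port to a container port (all names and port numbers here are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app        # matches pods carrying this label
  ports:
    - port: 80         # port the service exposes inside the cluster
      targetPort: 8080 # port the container actually listens on
```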
Kubernetes Deployments: A Kubernetes deployment is a higher-level abstraction that allows you to manage the desired state of pods in the cluster. Deployments ensure that a specified number of replicas of a pod are running at all times. Deployments are useful when you need to update the version of your application or scale the number of replicas.
When a deployment is created, it creates a ReplicaSet that manages the desired state of the pods. The ReplicaSet ensures that the specified number of pod replicas are running at all times. When a new version of the application is deployed, the deployment creates a new ReplicaSet for the updated version and gradually scales it up while scaling down the old one, performing a rolling update.
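A minimal deployment sketch tying this together (names, labels, and image are placeholders; the pod template matches the selector):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                # desired number of pod replicas
  selector:
    matchLabels:
      app: my-app
  template:                  # pod template managed by the ReplicaSet
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.23  # changing this tag triggers a rolling update
```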
Kubernetes ConfigMaps: Kubernetes ConfigMaps are used to store configuration data that is needed by your application. ConfigMaps are a way to separate configuration data from your application code, making it easier to manage and update the configuration. ConfigMaps can be created from files, directories, or literals.
When a ConfigMap is created, it is stored in the Kubernetes object store. The ConfigMap can then be mounted as a volume in a pod and used by the application.
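A sketch of both halves of that flow: a ConfigMap holding configuration data, and a pod mounting it as a volume (the names, keys, and mount path are hypothetical):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config          # hypothetical name
data:
  LOG_LEVEL: debug
---
# A pod consuming the ConfigMap as a volume:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: nginx:1.23
      volumeMounts:
        - name: config
          mountPath: /etc/app   # each key appears as a file here
  volumes:
    - name: config
      configMap:
        name: app-config
```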
Kubernetes Secrets: Kubernetes Secrets are used to store sensitive data, such as passwords and API keys, that is needed by your application. Secrets are similar to ConfigMaps but are intended for confidential data: note that their values are only base64-encoded by default, and encryption at rest must be enabled explicitly in the cluster configuration. Secrets can be created from files, directories, or literals.
When a Secret is created, it is stored in the Kubernetes object store. The Secret can then be mounted as a volume in a pod and used by the application.
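A minimal Secret sketch (the name and value are placeholders; `stringData` accepts plain text, which the API server base64-encodes on storage):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials      # hypothetical name
type: Opaque
stringData:
  DB_PASSWORD: example-password   # placeholder value, never commit real secrets
```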
Kubernetes Ingress: Kubernetes Ingress is a way to expose HTTP and HTTPS services from outside the cluster to services within the cluster. Ingress is used to route traffic to different services based on the URL path or host header. Ingress can also be used to provide TLS termination for HTTPS traffic.
An Ingress resource does not route traffic by itself: an ingress controller (such as ingress-nginx) running in the cluster watches for Ingress resources and configures a proxy or load balancer that routes incoming traffic to the specified services based on the URL path or host header, performing TLS termination where configured.
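A sketch of an Ingress routing a hostname to a service, with TLS termination (the host, service name, and secret name are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: app.example.com       # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app      # service receiving the traffic
                port:
                  number: 80
  tls:
    - hosts:
        - app.example.com
      secretName: app-tls         # Secret holding the TLS certificate
```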
Kubernetes Helm: Kubernetes Helm is a package manager for Kubernetes that allows you to easily install, upgrade, and manage Kubernetes applications. Helm uses charts, which are a collection of files that describe a set of Kubernetes resources. Charts can be customized to meet the specific needs of your application.
When a chart is installed, Helm creates the necessary Kubernetes resources, such as Deployments, Services, and ConfigMaps. Helm also lets you manage the configuration of the application through values files.
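A hypothetical values file overriding a chart’s defaults might look like this (all keys depend on how the particular chart’s templates are written):

```yaml
# values.yaml — illustrative overrides for a hypothetical chart;
# applied with something like: helm install my-release ./my-chart -f values.yaml
replicaCount: 3
image:
  repository: nginx
  tag: "1.23"
service:
  type: ClusterIP
  port: 80
```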
Kubernetes is a powerful tool for DevOps teams that allows for efficient management of containerized applications in a cluster environment. By understanding the key processes running in a Kubernetes cluster, you can better manage and troubleshoot your applications.