What is Kubernetes?

Kubernetes is a container orchestration tool designed for large-scale deployments and high-availability applications. It is an alternative to other container management tools such as Docker Swarm mode, and it operates well beyond the single-host scope of Docker Compose.

A Kubernetes deployment includes a control plane and a set of compute machines, or nodes. These nodes can be physical or virtual machines and can run a variety of Linux environments.

Kubernetes is an open source container orchestration platform that makes it easier to manage applications built from multiple, largely self-contained runtimes called containers. Containers have become increasingly popular since Docker launched in 2013, but large applications spread out across many containers are difficult to coordinate at scale.

A working Kubernetes cluster consists of two parts: the control plane and compute machines, or nodes. The control plane takes commands from an administrator (or DevOps team) and relays them to the nodes, which then run containers.

Each pod contains one or more containers, which share a fixed allocation of resources such as CPU and memory. Kubernetes uses a concept called Services to describe how these containers are reached over the network: a Service provides a stable endpoint for traffic even as the underlying pods are created, started and stopped, updated, or destroyed, whether that traffic comes from other pods on the same node, from other nodes, or from clients outside the cluster.

Pods bundle together co-located containers to encapsulate an application. A pod may begin with one or more init containers that set up the environment before the application containers start. Each container runs as an isolated set of Linux processes, but all containers on a node share that node’s operating system kernel, and the containers in a pod share the pod’s network identity, which binds them together as a single unit.
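As a rough sketch of the structure described above, a pod with an init container and a resource-limited application container might be declared like this (the names and image tags are illustrative assumptions, not taken from any real deployment):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app            # hypothetical name
spec:
  initContainers:          # run to completion, in order, before the app containers start
  - name: setup
    image: busybox:1.36
    command: ["sh", "-c", "echo preparing environment"]
  containers:
  - name: app
    image: nginx:1.25
    resources:             # the fixed CPU/memory allocation mentioned above
      requests:
        cpu: "250m"
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "256Mi"
```

The requests tell the scheduler how much capacity the pod needs; the limits cap what its containers may consume at runtime.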

A controller watches over the state of the pods and, when necessary, replicates or restarts a failed pod. Pods are also removed from cluster nodes once they have finished running.

Pods can access shared storage via volumes, which are directories that all containers in the pod can access. Data in a volume survives container restarts, and data stored in a persistent volume can outlive the pod itself, unlike ephemeral container storage. Containers in a pod can also communicate through other means, such as POSIX shared memory and System V semaphores.
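A minimal sketch of two containers sharing a volume might look like the following, assuming an emptyDir volume and hypothetical names (an emptyDir lives as long as the pod and is lost when the pod is deleted; a PersistentVolumeClaim would be used for data that must outlive the pod):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo   # hypothetical name
spec:
  containers:
  - name: writer
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data       # both containers see the same directory
  - name: reader
    image: busybox:1.36
    command: ["sh", "-c", "sleep 5 && cat /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    emptyDir: {}             # ephemeral: created with the pod, deleted with it
```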

Nodes are the building blocks of a Kubernetes cluster: they are the machines that actually execute your applications’ containers. Spreading work across many nodes is what lets organizations run highly scalable, self-healing, zero-downtime deployments.

A Node is a physical or virtual machine in a Kubernetes cluster that runs the kubelet and a container runtime. It is an object managed by the control plane, and it provides the platform on which your workloads run.

Each Node is assigned Pods by the scheduler. When a new Pod is created, the scheduler looks for an ideal Node and assigns the Pod to it, based on factors such as resource requirements, policy constraints, affinity and anti-affinity specifications, and data locality. You can get information about the Nodes in a Kubernetes cluster by running the kubectl get nodes command.
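The constraints the scheduler weighs can be expressed directly in a Pod manifest. A sketch, using an illustrative node label and resource request (both are assumptions for the example, not requirements of Kubernetes itself):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: constrained-pod      # hypothetical name
spec:
  nodeSelector:              # a simple policy constraint: only nodes carrying this label qualify
    disktype: ssd
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:              # the scheduler only picks a node with this much unreserved capacity
        cpu: "1"
        memory: "512Mi"
```

Affinity and anti-affinity rules extend the same idea with richer match expressions and soft (preferred) as well as hard (required) constraints.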

Controllers are control loops that continuously compare the current state of a resource with its desired state, then make changes to bring the actual state closer to the desired one. This is like a car’s cruise control, which constantly monitors the actual speed and adjusts the throttle so the car stays as close as possible to its set speed of 65 mph.
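The desired state a controller reconciles toward is simply what you declare in a manifest. For example, a Deployment with a replica count (the name and image here are illustrative) tells its controller to keep that many Pods running at all times, recreating any that fail:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # hypothetical name
spec:
  replicas: 3                # desired state: three matching Pods, always
  selector:
    matchLabels:
      app: web
  template:                  # the Pod template the controller stamps out
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: app
        image: nginx:1.25
```

If a node dies and takes a Pod with it, the observed count drops to two, and the control loop creates a replacement to close the gap.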

Each controller uses its own mechanism to keep track of the objects it manages. Many use a SharedInformer, a lightweight shared cache that reduces memory overhead and prevents duplicate retrieval of the same object.

Not every controller manages long-running workloads; some run tasks to completion instead. The Job controller, for example, watches for one-time jobs and creates Pods to carry them out. Once the work is done, the Job controller updates the Job object to mark it Complete.
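A run-to-completion task of the kind the Job controller handles might be sketched like this (the name and command are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: one-time-task        # hypothetical name
spec:
  template:
    spec:
      containers:
      - name: task
        image: busybox:1.36
        command: ["sh", "-c", "echo doing the work"]
      restartPolicy: Never   # Jobs run Pods to completion rather than keeping them alive
  backoffLimit: 3            # retry a failed Pod up to three times before giving up
```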

Kubernetes manages the discovery and routing of application components in a loosely coupled manner. It does this by creating Services in the cluster, exposing them over various protocols, and providing ready-made endpoints for external clients to connect to.

These Services are defined in YAML or JSON, just like any other Kubernetes object manifest. The set of Pods targeted by a Service is usually determined by a selector, which matches labels that can be attached to Pods at creation time or later on.

There are three commonly used types of Services: ClusterIP, NodePort, and LoadBalancer. Each type has its own nuances, but the principle is the same: they make it easy for Pods to reach each other and to be reached by clients. For instance, a client can call a NodePort Service by connecting to any node’s IP address on the Service’s node port, a static port that is opened on every node in the cluster.
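Tying the selector and the port types together, a NodePort Service might be sketched as follows (the name, label, and port values are illustrative; node ports must fall in the cluster’s configured range, 30000–32767 by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc              # hypothetical name
spec:
  type: NodePort
  selector:                  # targets every Pod carrying this label
    app: web
  ports:
  - port: 80                 # the Service's cluster-internal port
    targetPort: 80           # the container port traffic is forwarded to
    nodePort: 30080          # static port opened on every node in the cluster
```

A ClusterIP Service would omit type and nodePort and be reachable only inside the cluster, while a LoadBalancer Service additionally asks the cloud provider for an external load balancer in front of the node ports.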