Kubernetes architecture and components overview

2019

As part of the Kubernetes tutorials for beginners series, my goal is to offer a complete, solid list of articles that starts with the very basic definitions, the history of Kubernetes and containers, and the reasons to use them, and then works its way down to the deep parts. Regardless of your technical background, I will offer you everything you need here to master Kubernetes step by step.

So today we'll get a deep overview of the Kubernetes architecture and its components. We'll discuss how those components work together and how, with just a simple deployment file and an occasional command, Kubernetes manages the deployment, scaling, partitioning, distribution, and availability of containerized applications.

To understand how Kubernetes delivers the potential of a future-ready, solid, and scalable solution for handling containerized applications in production, it is helpful to get a sense of how it is designed and organized at a high level. Kubernetes can be visualized as a system built in layers, with each higher layer abstracting the complexity found in the levels below it.

Kubernetes architecture


Here you can see the standard master-slave architecture for running application containers with Kubernetes.

Kubernetes architecture

kubectl is a command-line tool that interacts with kube-apiserver and sends commands to the master node. Each command is converted into an API call.

Master node

It is responsible for managing the Kubernetes cluster and is the entry point for all administrative tasks. The master node manages the cluster's workload and directs communication across the system. It consists of various components, each with its own process; they can all run on a single master node or be replicated across multiple masters for high availability.

Master Components

The various components of the Kubernetes control plane (master) are:

kube-apiserver (API server)

The API server is a key component: it exposes the Kubernetes API, using JSON over HTTP, and is the entry point for all the REST commands used to control the cluster. It processes REST requests, validates them, and executes the bound business logic. A kubeconfig file, together with client-side tools such as kubectl, is used to communicate with it.

kube-controller-manager (Controller manager)

This component runs most of the controllers that regulate the state of the cluster and perform routine tasks. In general, it can be considered a daemon running in a non-terminating loop that watches the shared state of the cluster through the API server and makes changes to bring the current state toward the desired state. The key controllers are the replication controller, endpoints controller, namespace controller, and service account controller. The controller manager thus runs different kinds of controllers to handle nodes, endpoints, and so on.
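As a sketch of the desired state a controller works toward, consider a ReplicationController manifest (the name and image below are illustrative assumptions): the replication controller keeps creating or deleting pods until the number of running replicas matches the declared count.

```yaml
# Illustrative ReplicationController manifest: the replication controller
# in kube-controller-manager continuously compares the observed number of
# pods matching the selector against .spec.replicas, and creates or
# deletes pods to close the gap.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc              # hypothetical name
spec:
  replicas: 3               # desired state: three running pods
  selector:
    app: web
  template:                 # pod template used when new replicas are needed
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.17   # hypothetical image
```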

kube-scheduler

The deployment of configured pods and services onto the nodes is done by the scheduler. Kube-scheduler tracks resource utilization on each node to ensure that workloads are not scheduled in excess of the available resources. For this purpose, the scheduler must know the resource requirements, the resource availability, and a variety of other user-provided constraints. In other words, this is the mechanism responsible for allocating pods to available nodes.
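A minimal sketch of the user-provided constraints the scheduler consumes (the pod name, image, and node label below are assumptions): resource requests tell kube-scheduler how much CPU and memory to reserve on a node, and a nodeSelector restricts the set of eligible nodes.

```yaml
# Illustrative pod spec: kube-scheduler places this pod only on a node
# that has at least 250m CPU and 128Mi memory unreserved, and whose
# labels include disktype=ssd.
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-demo       # hypothetical name
spec:
  nodeSelector:
    disktype: ssd            # user-provided placement constraint
  containers:
  - name: app
    image: nginx:1.17        # hypothetical image
    resources:
      requests:              # what the scheduler reserves on the node
        cpu: "250m"
        memory: "128Mi"
      limits:                # upper bound enforced at runtime
        cpu: "500m"
        memory: "256Mi"
```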

etcd

etcd is a simple, distributed, consistent, and lightweight key-value store. It holds the configuration data of the cluster, representing the overall state of the cluster at any given instant. It is mainly used for shared configuration and service discovery, and it is accessible only through the Kubernetes API server, as it may contain sensitive information.

Kubernetes nodes (worker nodes, previously called minions)

Pods are deployed on Kubernetes nodes, so each worker node contains all the services necessary to manage the networking between containers, communicate with the master node, and assign resources to the scheduled containers. Every node in the cluster must run a container runtime (such as Docker), as well as the node components described below.

Node components

Kubelet

The kubelet is a small service that runs on each node. It gets the configuration of its pods from the API server and ensures that the described containers are up and running, taking care of starting, stopping, and maintaining pods as directed by the master. It is also responsible for communicating with the master node, reporting the state of the node and the details of newly created pods running on it.
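As a sketch of how "up and running" is enforced, a liveness probe (pod name and image below are illustrative) tells the kubelet how to health-check a container; when the probe fails, the kubelet restarts that container.

```yaml
# Illustrative liveness probe: the kubelet on the node runs this HTTP
# check periodically and restarts the container when the check fails.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo        # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.17        # hypothetical image
    livenessProbe:
      httpGet:
        path: /              # endpoint the kubelet probes
        port: 80
      initialDelaySeconds: 5 # wait before the first probe
      periodSeconds: 10      # probe interval
```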

cAdvisor

cAdvisor monitors and collects resource usage and performance metrics of CPU, memory, file and network usage of containers on each node.

Kube-Proxy

This is a proxy service that runs on each node and helps make services reachable, both inside the cluster and, for some service types, from external hosts. It forwards requests to the correct containers by routing TCP and UDP packets to the appropriate container based on the IP and port number of the incoming request, and it is capable of performing primitive load balancing. It makes sure that the networking environment is predictable and accessible while at the same time keeping it isolated.
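A minimal Service manifest (the names and ports below are assumptions) shows what kube-proxy implements on each node: traffic arriving at the service's IP on `port` is forwarded to a matching pod on `targetPort`.

```yaml
# Illustrative Service: kube-proxy programs the node's forwarding rules
# so that TCP traffic to this service's cluster IP on port 80 is routed
# to port 8080 of a pod labelled app=web.
apiVersion: v1
kind: Service
metadata:
  name: web-service          # hypothetical name
spec:
  selector:
    app: web                 # pods that receive the traffic
  ports:
  - protocol: TCP
    port: 80                 # port exposed on the service IP
    targetPort: 8080         # port on the backing containers
```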

Addons (extra components)

Dashboard (optional)

Kubernetes’ web UI that simplifies the Kubernetes cluster user’s interactions with the API server.

DNS

While the other addons are not strictly required, all Kubernetes clusters should have cluster DNS, as many examples rely on it. Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services. Containers started by Kubernetes automatically include this DNS server in their DNS searches.
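As an illustration of the records cluster DNS serves (the service and namespace names are assumptions, and cluster.local is the common default cluster domain), a Service named web in the prod namespace becomes resolvable inside the cluster by a predictable name:

```yaml
# Illustrative Service and the DNS name cluster DNS creates for it.
# Pods in the same namespace can also use the short name "web".
apiVersion: v1
kind: Service
metadata:
  name: web           # hypothetical name
  namespace: prod     # hypothetical namespace
# resulting record: web.prod.svc.cluster.local
spec:
  selector:
    app: web
  ports:
  - port: 80
```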

Container Resource Monitoring

Container Resource Monitoring records generic time-series metrics about containers in a central database, and provides a UI for browsing that data.

Cluster-level Logging

A cluster-level logging mechanism is responsible for saving container logs to a central log store with a search/browsing interface.

How Kubernetes components work together

At its base, Kubernetes brings together individual physical or virtual machines into a cluster using a shared network for communication between the servers. This cluster is the physical platform where all Kubernetes components, capabilities, and workloads are configured.

The machines in the cluster are each given a role within the Kubernetes ecosystem. One server (or a small group in highly available deployments) functions as the master server. This server acts as a gateway and brain for the cluster by exposing an API for users and clients, health checking other servers, deciding how best to split up and assign work (known as “scheduling”), and orchestrating communication between other components. The master server acts as the primary point of contact with the cluster and is responsible for most of the centralized logic Kubernetes provides.

The other machines in the cluster are designated as nodes: servers responsible for accepting and running workloads using local and external resources. To help with isolation, management, and flexibility, Kubernetes runs applications and services in containers, so each node needs to be equipped with a container runtime (like Docker or rkt). The node receives work instructions from the master server and creates or destroys containers accordingly, adjusting networking rules to route and forward traffic appropriately.

As mentioned above, the applications and services themselves are run on the cluster within containers. The underlying components make sure that the desired state of the applications matches the actual state of the cluster. Users interact with the cluster by communicating with the main API server either directly or with clients and libraries. To start up an application or service, a declarative plan is submitted in JSON or YAML defining what to create and how it should be managed. The master server then takes the plan and figures out how to run it on the infrastructure by examining the requirements and the current state of the system. This group of user-defined applications running according to a specified plan represents Kubernetes’ final layer.
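The declarative plan described above can be sketched as a small Deployment manifest (the names and image are assumptions): the user declares what to create and how many replicas to run, and the master works out where and how to run them.

```yaml
# Illustrative declarative plan: submitted to the API server (e.g. with
# kubectl apply -f), after which the master schedules the replicas onto
# nodes and keeps the actual state matching this description.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment       # hypothetical name
spec:
  replicas: 2                # desired number of pod replicas
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.17    # hypothetical image
```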

 

So we took a deep dive into the Kubernetes architecture and its components, and we got an idea of why it has become the most popular container orchestration engine and why it has the potential to support enterprise-scale software and container management.

In the fourth blog post in this series, we’ll talk about Kubernetes concepts and some other important abstractions and definitions.

See you! 😀
