As part of the Kubernetes tutorials for beginners series, my goal is to offer a complete, solid list of articles that runs from the very basic definitions, the history, and the need for Kubernetes and containers all the way to the deep parts. So regardless of your technical background, you will find everything you need here to master Kubernetes step by step.
So as I told you guys, in this article I am going to introduce Kubernetes and explain why you should use it in a simple way, because I found that a lot of beginners struggle with the default definition on the official Kubernetes website. Let's go! No time to waste!
First and foremost, what is a cluster?
A cluster is a group of servers and other resources that act as a single system and enable high availability and, in some cases, load balancing and parallel processing.
What is Kubernetes?
Kubernetes, stylized as K8s, is a powerful open-source system, initially developed by Google, for managing containerized applications in a clustered environment. It aims to provide better ways of managing related, distributed components and services across varied infrastructure.
Kubernetes, at its basic level, is a system for running and coordinating containerized applications across a cluster of machines. It is a platform designed to completely manage the lifecycle of containerized applications and services using methods that provide predictability, scalability, and high availability.
Why is Kubernetes written in Go? 😕 😕
When Kubernetes was developed, the main candidate programming languages were C, C++, Java, and Python, but each had drawbacks that made it a poor fit.
That's why the developers wanted a language that incorporated the prominent features of existing languages, which led to the creation of Golang, or simply the Go programming language, in 2007. Go's biggest selling point was that it was neither too high-level nor too low-level.
The following features made Go the ideal programming language for Kubernetes:
- Great set of system libraries: Go ships with a vast standard library covering almost every common task.
- Fast testing tools: development is sped up by Go's built-in testing tools.
- Built-in concurrency: goroutines and channels make it easy to fan out network calls and collect the results, which helps tremendously when building distributed systems.
- Garbage collection: Go's garbage collector is a concurrent, tri-color, mark-and-sweep collector.
- Type safety: Go's type system protects you from many simple memory bugs, such as buffer overflows.
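To illustrate the built-in concurrency point above, here is a minimal Go sketch (not actual Kubernetes code; the node names and canned counts are made up) that fans out one simulated network call per node with goroutines and collects the results over a channel:

```go
package main

import (
	"fmt"
	"sync"
)

// replicaCount simulates a per-node network call; in a real system this
// would be an API request, but here it just looks up a canned answer.
func replicaCount(node string) int {
	counts := map[string]int{"node-a": 2, "node-b": 3, "node-c": 1}
	return counts[node]
}

// totalReplicas launches one goroutine per node and gathers the
// results over a buffered channel -- the fan-out/collect pattern
// the bullet above describes.
func totalReplicas(nodes []string) int {
	results := make(chan int, len(nodes))
	var wg sync.WaitGroup
	for _, n := range nodes {
		wg.Add(1)
		go func(node string) {
			defer wg.Done()
			results <- replicaCount(node)
		}(n)
	}
	wg.Wait()
	close(results)

	total := 0
	for c := range results {
		total += c
	}
	return total
}

func main() {
	nodes := []string{"node-a", "node-b", "node-c"}
	fmt.Println("total replicas:", totalReplicas(nodes)) // prints "total replicas: 6"
}
```

The `sync.WaitGroup` plus channel combination shown here is the idiomatic Go way to wait for many concurrent calls and aggregate their answers.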
As a Kubernetes user, you can define how your applications should run and the ways they should be able to interact with other applications or the outside world. You can scale your services up or down, perform graceful rolling updates, and switch traffic between different versions of your applications to test features or rollback problematic deployments. Kubernetes provides interfaces and composable platform primitives that allow you to define and manage your applications with high degrees of flexibility, power, and reliability.
Why should you use Kubernetes?
You should use Kubernetes simply because it is the most complete container orchestration engine. Here is what Kubernetes offers you:
Deploying an application with Kubernetes requires just a single command. In the background, Kubernetes creates the runtime environment, requests the needed resources, handles the launch of the services, and provides each with an IP address. It also scales the containers across the cluster until each service is deployed to the level requested and maintains these levels 24/7.
You decide how many copies of each service are needed. Because the services are containerized, you can set different levels for different parts of the app. When you first deploy, you calculate some starting numbers for each service. Kubernetes makes sure each service is running the correct number of copies. If there are too few, it will launch more. If there are too many, it will kill a few until the correct number is running.
Suppose you determine that there are too many copies of a service running and they are sitting dormant, or that application usage has increased and you need more copies to handle the load. You can change the settings in the deployment file, redeploy, and Kubernetes will adjust the number of running copies of each service to meet the new requirements.
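The replica behaviour described above is driven by the `replicas` field of a Deployment manifest. A minimal sketch, with hypothetical names (`health-api`, the image URL) chosen for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: health-api                # hypothetical service name
spec:
  replicas: 3                     # Kubernetes keeps exactly 3 copies running
  selector:
    matchLabels:
      app: health-api
  template:
    metadata:
      labels:
        app: health-api
    spec:
      containers:
        - name: health-api
          image: example.com/health-api:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

Deploying it really is a single command, `kubectl apply -f deployment.yaml`; editing `replicas` and re-applying the file is all it takes to scale up or down.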
Kubernetes watches how many copies of each service are up. If a container fails and goes down, Kubernetes launches a new copy. Kubernetes continually verifies that the number of running copies of each service matches what was requested.
If an entire server goes down, Kubernetes redeploys the missing containers on other nodes, again until the number of running copies matches what you defined. You can rest assured that your app will stay highly available, as long as your cluster itself is healthy.
Kubernetes continuously monitors the usage of containers across nodes, verifying that the work is evenly distributed. If it finds an underused container or resource, it moves work to that resource, and may even move copies of a container to underused hardware.
When applications are broken into microservices, the individual services need to talk to each other, in order to pass along client information. Kubernetes creates a service within itself to enable the different microservices to communicate. This communication service determines which containers can use it, based on labels on the container, and then defines a port that can be used by any container with that label.
As a service reads data from a wearable device on a customer, it will pass that data to the other services in the app that will stream the data, authenticate it with the health-care provider, and so on. Each instance of any service can use the same port to communicate with the other microservices in the app or any other services on the cluster that it needs.
The communication service in Kubernetes is persistent, independent of the services that use it. If a container goes down or a new container is spun up, the service will continue to be available at its port to any application with the correct label.
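In Kubernetes terms, this persistent communication endpoint is the Service object: it selects containers by label and exposes a stable port, exactly as described above. A hedged sketch, with the same hypothetical `health-api` name:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: health-api        # stable in-cluster DNS name (hypothetical)
spec:
  selector:
    app: health-api       # any pod carrying this label receives traffic
  ports:
    - port: 80            # port other services connect to
      targetPort: 8080    # port the container actually listens on
```

Other microservices simply connect to `health-api:80`; the Service keeps working at that address no matter how often the pods behind it are killed or replaced.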
Let’s consider the example of a health-monitoring application, serving thousands of users, sending data to a variety of health-care providers. With Kubernetes, the services could be divided up by health-care provider. Each provider could offer a differing number of services, based on usage, or could even provide variations on a service to a client, based on that client’s particular needs.
For example, say that this application spins up three copies of the app for users of Mega-Health, but provides four copies to Health R Us because they have a larger customer base. In addition, Health R Us uses a communication protocol different from Mega-Health – so, a separate microservice is used to connect to their system.
When an application update is ready to roll out, the Kubernetes deployment file needs to be updated with the new information.
Kubernetes will gradually kill existing containers with the current version of the app and spin up new containers with the updated version, until all containers for that service are running the new version.
If there is a problem along the way, you can roll back the upgrade with a single command. Kubernetes will gradually kill containers with the new 2.0 version of the app and replace them with new instances of the older 1.0 version.
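The pace of that gradual replacement is controlled by the Deployment's update strategy, and the single rollback command is `kubectl rollout undo`. A sketch of the relevant fragment (the deployment name is hypothetical):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one old copy taken down at a time
      maxSurge: 1         # at most one extra new copy during the update
```

If the new version misbehaves, `kubectl rollout undo deployment/health-api` steps the Deployment back to its previous revision.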
You can connect with the Kubernetes community on the Slack channel or the discussion board, or join the Kubernetes-dev Google group. A weekly community meeting takes place via video conference to discuss the state of affairs; see these instructions for information on how to participate.