What is Virtualization – The history of Virtualization – Disadvantages of Virtualization


As part of the Kubernetes tutorials for beginners series, my goal is to offer a full and solid list of articles that starts with the very basic definitions, the history, and the reasons we use Kubernetes and containers, and then works its way to the deeper parts. So regardless of your technical background, you will find everything you need here to master Kubernetes step by step.

Personally, I learn best through stories, and in order to understand something new, I prefer to start from zero and build the full story in my head. I don't like to dive directly into the install and usage process; I also want to know how and why we got here. That's why today we are going to discuss the story of Virtualization.

What is Virtualization?


Virtualization is the technique of running a guest operating system on top of a host operating system. This technique was a revelation at the time because it allowed developers to run multiple operating systems in different virtual machines, all running on the same host, which eliminated the need for extra hardware resources. The advantages of virtual machines, or virtualization, are:

  • Multiple operating systems can run on the same machine
  • Maintenance and recovery are easier in case of failure
  • Total cost of ownership is lower due to the reduced need for infrastructure
What is Virtualization

In the diagram, you can see a host operating system on top of which three guest operating systems are running; these guests are the virtual machines.

Virtualization history


I started piecing together the history of virtualization, and I found that to fully understand it, we need to go back much further than you might think. We need to go back to the early days of virtualization in the 1960s!

So let's take a tour with my time machine 😀

Centralized computing

In the 1960s, multiple users were connected to a single mainframe through terminals, which allowed computing to be done at a central location. Centralized computing made it possible to control all processing from a single place, so if one terminal broke down, the user could simply go to another terminal, log in there, and still have access to all of their files.

However, this approach had some disadvantages. For example, if a user crashed the central computer, the system went down for everyone. Issues like this made it apparent that computers needed to be able to isolate not only individual users but also system processes.

With the creation of centralized computers, we began to see the first hints of what we now call virtualization.

chroot: the grandfather of all Linux virtualization

In 1979, during the development of the seventh edition of Unix, we took another step towards creating shared, yet isolated, environments with the chroot (change root) system call. It made it possible to change the apparent root directory of a running process, along with all of its children. This made it possible to isolate system processes into their own segregated filesystems so that testing could occur without impacting the global system environment. In March 1982, Bill Joy added chroot to BSD.

chroot
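
To make the idea concrete, here is a minimal sketch in Python (the jail directory /srv/jail is a hypothetical example, and the script has to run as root) showing how chroot changes the apparent root directory of a process:

    import os

    # Hypothetical jail directory; it must already exist and contain
    # whatever files the confined process is expected to see.
    os.chroot("/srv/jail")
    os.chdir("/")  # step into the new root

    # From here on, this process and its children only see files under
    # /srv/jail: opening "/etc/passwd" now resolves to /srv/jail/etc/passwd
    # on the real filesystem.
    print(os.listdir("/"))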

2000: FreeBSD Jails

On March 4, 2000, FreeBSD introduced the jail command into its operating system. Although it was similar to the chroot command, it also included additional process sandboxing features for isolating filesystems, users, networks, etc. FreeBSD jail gave us the ability to assign an IP address, configure custom software installations, and make modifications to each jail. This wasn’t without its own issues, as applications inside the jail were limited in their functionality.

2004: Solaris Containers

In 2004, we saw the release of Solaris Containers, which created full application environments through the use of Solaris Zones. With zones, you can give an application full user, process, and filesystem space, along with access to the system hardware. However, the application can only see what is within its own zone.

2006: Process Containers

In 2006, engineers at Google announced their launch of process containers designed for isolating and limiting the resource usage of a process. In 2007, these process containers were renamed control groups (cgroups) to avoid confusion with the word container.
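
Just to illustrate the idea (this is a hedged sketch, not Google's original implementation: it assumes a cgroup v2 filesystem mounted at /sys/fs/cgroup, root privileges, the memory controller enabled, and a hypothetical group called demo), here is the kind of resource limiting cgroups make possible:

    import os

    cg = "/sys/fs/cgroup/demo"  # hypothetical control group
    os.makedirs(cg, exist_ok=True)

    # Cap the memory of every process placed in this group at 64 MiB.
    with open(os.path.join(cg, "memory.max"), "w") as f:
        f.write(str(64 * 1024 * 1024))

    # Move the current process into the group; its children inherit the limit.
    with open(os.path.join(cg, "cgroup.procs"), "w") as f:
        f.write(str(os.getpid()))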

2008: LXC

In 2008, cgroups were merged into Linux kernel 2.6.24, which led to the creation of the project we now know as LXC. LXC stands for Linux Containers and provides virtualization at the operating system level by allowing multiple isolated Linux environments (containers) to run on a shared Linux kernel. Each one of these containers has its own process and network space.

2013: LMCTFY

In 2013, Google changed containers once again by open-sourcing their container stack as a project called Let Me Contain That For You (LMCTFY). Using LMCTFY, applications could be written to be container-aware and thus programmed to create and manage their own sub-containers. Work on LMCTFY stopped in 2015, when Google decided to contribute the core concepts behind LMCTFY to libcontainer, a Docker project.

The Rise of the famous Docker

Docker was released as an open-source project in 2013. It gave developers the ability to package containers so that they could be moved from one environment to another. Docker initially relied on LXC technology, but in 2014 LXC was replaced with libcontainer, which enabled containers to work with Linux namespaces, control groups (cgroups), capabilities, AppArmor security profiles, network interfaces, and firewall rules. Docker continued its contributions to the community by adding global and local container registries, a RESTful API, and a CLI client. Later, Docker introduced a container cluster management system called Docker Swarm.
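
As a small taste of what that API looks like in practice, here is a hedged sketch using the Docker SDK for Python (it assumes the docker package is installed and a local Docker daemon is running; the alpine image is just an example):

    import docker

    # Connect to the local Docker daemon using environment defaults.
    client = docker.from_env()

    # Pull the alpine image if needed and run a throwaway container;
    # the same image runs unchanged on any host with a Docker daemon.
    output = client.containers.run("alpine", "echo hello from a container", remove=True)
    print(output.decode())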

Now get the f* out of my time machine 😀 !

Disadvantages of Virtualization


As you know, nothing is perfect, and virtualization has its own shortcomings. Running multiple virtual machines on the same host operating system leads to performance degradation, because each guest OS runs on top of the host OS with its own kernel and its own set of libraries and dependencies. This takes up a large chunk of system resources: hard disk, processor, and especially RAM.

Another problem with virtual machines is that they take almost a minute to boot up, which is critical for real-time applications.

The main disadvantages of Virtualization are:

  • Running multiple virtual machines on the same host leads to unstable performance
  • Hypervisors are not as efficient as the host operating system
  • The boot-up process is long

These drawbacks led to the emergence of a new technique called Containerization, which is the topic of our next article. There we will cover what containerization is, what a container is, the most widely used container engines, and finally the benefits of containerization.

Be ready! 😀
