Docker and other container-virtualisation systems are revolutionising IT at an astounding speed. The analyst firm Gartner predicts that by 2022, over 75% of companies will run applications based on containers such as Docker – currently the figure is about 30%. Docker is not only successful, but also popular with developers: in a 2019 survey by the developer community Stack Overflow, Docker took first place in the category ‘Most wanted platform’ and second place in ‘Most loved platform’.
Container or virtual machine – which is better suited for your project?

Some already see the end of virtual machines (VMs) coming. After all, both containers and VMs provide virtual resources on which applications are hosted. If you're looking for the best way to run your own services in the cloud, you need to understand both forms of virtualisation technology. So what are the differences between VMs and containers?
A VM emulates a computer system in software. This makes it possible to run many such VMs separately on a single piece of hardware, the host server. The software of the VMs – i.e. operating systems like Linux or Windows and the corresponding applications – shares the hardware resources of the host server, such as hard disks, RAM and CPU.
Each VM has its own complete operating system running on emulated hardware. That hardware is simulated by a software layer called a hypervisor, which sits between the operating system of the host server and the VMs. The Open Telekom Cloud also offers VMs; these are based on the open-source hypervisor KVM (Kernel-based Virtual Machine).
- Cost-effectiveness: The main advantage of VMs over ‘real’ servers is economic: it is cheaper to emulate multiple systems on one host server than to run the same number of systems on dedicated hardware, so-called bare metal servers.
- Easier management: Applications are often easier to manage when they run on separate systems. Programs such as Exchange or database applications often require their own hardware because they demand far more processing power than other applications.
- Flexibility: VMs allow different operating systems to run on the same server.
- System-resource usage: Each VM runs not only a complete copy of an operating system, but also a virtual copy of all the hardware that operating system needs. This quickly ties up a great deal of memory and CPU cycles.
- Performance: VMs are slower than physical machines because they access the hardware indirectly via the hypervisor.
Like VMs, containers are a form of virtualisation. But instead of emulating an entire computer system, they virtualise only at the level of the operating system. Typically, a container holds a single application together with all the binaries, libraries and configuration files it needs. Every container shares the kernel of the host operating system and usually its binaries and libraries as well; these shared components are read-only. Because of this sharing, the operating-system code does not have to be copied for every container.
That is why containers are exceptionally small – they are only a few megabytes in size and, therefore, take only seconds to start. VMs, on the other hand, often take minutes to get up and running. The small size of containers allows a very large number of them to run simultaneously on a host server.
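To make this concrete, here is a minimal sketch using the Docker SDK for Python (the ‘docker’ package); it assumes a running local Docker daemon, and the image name is only an example. Because the host kernel is shared and no operating system has to boot, the container runs its single command and exits within seconds.

```python
# Minimal sketch: start a container, run one command, clean up.
# Assumes the Docker SDK for Python ("pip install docker") and a running
# Docker daemon; "alpine:3.19" is just an example image.
import docker

client = docker.from_env()

# The image bundles one application with its binaries, libraries and
# configuration; the host's kernel is shared, so nothing has to boot.
output = client.containers.run(
    "alpine:3.19",
    ["echo", "hello from a container"],
    remove=True,  # delete the container once the command has finished
)
print(output.decode())
```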
Today, containers play a major role in agile software development because they allow for the testing of countless versions of an application with all its dependencies simultaneously. They also make it possible to break down very large and complex software architectures into software components, so-called microservices. Each application process runs as a microservice in its own container and communicates with other processes via an API. This makes it possible to change or redeploy individual microservices independently of the others at any time without endangering the stability of the entire software architecture. Thus, companies can quickly build, scale and develop large architectures during operation. Examples of companies that rely on microservices are Google, Amazon and Netflix.
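As a rough sketch of what such a microservice can look like, the following hypothetical Python service exposes one small HTTP API; in a microservice architecture it would run alone in its own container and be called by other services over the network (service name, port and payload are invented for illustration).

```python
# Hypothetical "price" microservice: one process, one small HTTP API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PriceService(BaseHTTPRequestHandler):
    """Answers GET /price with JSON; other microservices call this endpoint."""

    def do_GET(self):
        if self.path == "/price":
            body = json.dumps({"article": "demo", "price_eur": 9.99}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Bind to all interfaces so other containers can reach the service.
    HTTPServer(("0.0.0.0", 8080), PriceService).serve_forever()
```

Because the service only talks to the outside world through this API, it can be replaced, redeployed or scaled independently of the services that call it.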
The most widely used platform for container virtualisation is Docker; its name is often used as a synonym for the technology itself. The Cloud Container Engine of the Open Telekom Cloud is also based on Docker.
- Lower costs: Containers require fewer system resources than physical or virtual machines because they contain only what is needed to run the application. With containers, companies can greatly reduce the number of servers and licenses they need.
- Portability: Once ‘containerised’, applications can be deployed and moved on any infrastructure – VMs, bare metal and various public clouds with different hypervisors. DevOps teams know that applications in containers will always run in the same way regardless of their location.
- Greater efficiency: Applications can be deployed, patched or scaled faster in containers than in VMs. Containers can greatly accelerate development, testing and production cycles, for example.
- Data persistence is harder: Containers are designed so that all data inside them disappears when the container is shut down, unless the data is first saved somewhere else, for example on a mounted volume (see the sketch after this list).
- Not all applications benefit from containers: In general, only applications designed to run as microservices can get the most out of containers.
- Security: The shared Linux kernel offers a far larger attack surface than the hypervisor of a VM. If an attacker succeeds in reaching the kernel from inside a container, all containers running on that kernel are usually affected. VMs therefore tend to isolate applications better than containers do.
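As a sketch of the persistence point above – again assuming the Docker SDK for Python and a local Docker daemon; the image, volume name and password are placeholders – mounting a named volume keeps the data outside the container's writable layer, so it survives when the container is removed:

```python
# Minimal persistence sketch: without the volume mount, everything the
# database writes would disappear together with the container.
import docker

client = docker.from_env()

container = client.containers.run(
    "postgres:16",                       # example image
    detach=True,
    environment={"POSTGRES_PASSWORD": "example"},
    volumes={"pg-data": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
print("started:", container.short_id)
```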
If your company runs a large number of instances of the same operating system, you should check whether containers are suitable for you – they could save you significant time and money compared to VMs. Compared to VMs, containers are best suited for these use cases:
- Creating cloud-native applications
- Operating microservice architectures
- Implementing DevOps practices in development
- Moving IT projects across different infrastructures that use the same operating system

VMs are the better choice for applications that need all the resources and features of a complete operating system, when several applications have to run on the same servers, or when a variety of operating systems has to be managed. Compared to containers, VMs are best suited for these situations:
- Providing infrastructural resources such as networks, servers and storage
- Running an operating system within another operating system (e.g. Unix under Linux)
- Operating legacy systems in the cloud
- Isolating risky development cycles
Although containers offer many advantages over VMs, they will not drive them out of the market, as there are still use cases where VMs are the more viable option. In addition, VMs will remain relevant as long as vendors of widely used software do not offer production-ready container alternatives.
In any case, containers and VMs should be seen as complementary rather than competing technologies, because containers can also run inside VMs. On the one hand, this increases isolation and thus security; on the other, virtualisation makes it easier to manage the hardware infrastructure – networks, servers and storage – needed to run containers. The flexibility of VMs and the minimal resource requirements of containers together create IT environments with maximum functionality.
The Cloud Container Engine (CCE) of the Open Telekom Cloud supports the creation of container clusters on both VMs (ECS) and bare metal servers. Our consultants will be happy to help you find the right system architecture.