Virtualization or Cloud

Installing new physical hardware for every new microservice is hardly practical. Microservices therefore profit from virtualization or a Cloud, which makes the infrastructure much more flexible: new virtual machines for scaling or for test environments can be provided easily. In the continuous delivery pipeline, microservices are constantly started to run different tests, and in production new instances have to be started depending on the load.

Therefore, it must be possible to start a new virtual machine in a completely automated manner. Starting new instances with simple API calls is exactly what a Cloud offers. To really implement a microservice-based architecture, a cloud infrastructure should be available; virtual machines that operations provides via manual processes are not sufficient. This also shows that microservices can hardly be run without modern infrastructures.
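Any cloud with a programmable API can serve this purpose. As a minimal sketch, assuming AWS EC2 and the boto3 library (neither is prescribed here), starting a new virtual machine is a single API call; the image ID and instance type are placeholders:

    # Minimal sketch: starting a virtual machine with one API call.
    # Assumes AWS EC2 via boto3; ImageId and InstanceType are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-central-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder image ID
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])

A continuous delivery pipeline can issue exactly this kind of call to create test environments on demand and tear them down again afterwards.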

Docker

When there is an individual virtual machine for each microservice, it is laborious to generate a test environment containing all microservices. Even an environment with relatively few microservices can be a challenge for a developer machine, because its RAM and CPU usage is very high. In fact, it is hardly sensible to use an entire virtual machine for one microservice: in the end, the microservice should just run and integrate into logging and monitoring. Therefore, solutions like Docker are convenient: Docker omits many of the operating system features that are normally present.

Instead, Docker[1] offers very lightweight virtualization. For this purpose, Docker uses several technologies:

  • In place of complete virtualization, Docker employs Linux containers.[2] Support for similar mechanisms in Microsoft Windows has been announced. This enables a lightweight alternative to virtual machines: all containers use the same kernel, so there is only one instance of the kernel in memory, while processes, networks, file systems, and users remain separate from each other. Compared to a virtual machine with its own kernel and often many operating system services, a container has a far lower overhead; it is easily possible to run hundreds of Linux containers on a simple laptop. A container also starts much more rapidly than a virtual machine with its own kernel and complete operating system: it does not have to boot an entire operating system but just starts a new process. The container itself adds little overhead, since it only requires a custom configuration of operating system resources (see the sketch after this list).
  • In addition, the file system is optimized: a basic read-only file system can be used, and additional file systems that also permit writing can be added on top of it for the container. One file system can thus be put on top of another. For instance, a basic file system can contain an operating system; if software is installed in the running container or files are modified, the container only has to store these additions in a small container-specific file system. In this way the storage required for the containers on the hard drive is significantly reduced.
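A container start is therefore essentially a process start. As a minimal sketch, assuming the Docker SDK for Python and a local Docker daemon (an assumption, not part of the text), running a throwaway container takes only a few lines:

    # Sketch: starting a container just starts a new process in an isolated
    # environment. Assumes the Docker SDK for Python ("docker" package).
    import docker

    client = docker.from_env()

    # run() pulls the image if necessary, starts the container, waits for the
    # command to finish, and returns its output; remove=True discards the
    # container afterwards.
    output = client.containers.run("ubuntu", "echo 'hello from a container'",
                                   remove=True)
    print(output.decode())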

This opens up additional interesting possibilities. For example, a container can be started from a basic file system containing an operating system, and software can then be installed in it. As mentioned, only the changes to the file system that the installation introduces are saved. From this delta a new file system layer can be generated. A container can then be started that puts this delta on top of the basic file system with the operating system, and further software can be installed in yet another layer. In this manner each “layer” of the file system contains specific changes, and the real file system at run time is composed from numerous such layers. This makes it possible to reuse software installations very efficiently.
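As a rough sketch of this workflow, again assuming the Docker SDK for Python (the package name and image tag below are placeholders), software can be installed in a running container and the resulting delta committed as a new image layer:

    # Sketch: installing software in a running container and saving only the
    # delta as a new image layer. Names are placeholders.
    import docker

    client = docker.from_env()

    # Install a package inside a plain Ubuntu container.
    container = client.containers.run(
        "ubuntu",
        ["/bin/sh", "-c", "apt-get update && apt-get install -y curl"],
        detach=True,
    )
    container.wait()

    # commit() stores only the changes against the base image as a new layer.
    image = container.commit(repository="ubuntu-with-curl", tag="latest")
    print(image.id)
    container.remove()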

Figure 11.4 Filesystems in Docker

Figure 11.4 shows an example of the file system of a running container: the lowest layer is an Ubuntu Linux installation. On top of it are the changes introduced by installing Java. Then comes the application. So that the running container can write changes to the file system, there is a writable file system on top, into which the container writes its files. When the container wants to read a file, it moves through the layers from top to bottom until it finds the respective data.
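To see such a layer stack for a concrete image, the Docker SDK for Python can be used again. As a sketch (the image tag "my-app" is a placeholder), the history of an image lists each layer together with the command that created it and the size of its delta:

    # Sketch: inspecting the layers an image is composed of.
    # Assumes the Docker SDK for Python; "my-app" is a placeholder tag.
    import docker

    client = docker.from_env()
    image = client.images.get("my-app")

    # history() returns the layers from newest to oldest.
    for layer in image.history():
        print(layer["CreatedBy"][:60], layer["Size"])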

  • [1] https://www.docker.com/
  • [2] https://linuxcontainers.org/
 