Docker is a platform for packaging applications into containers that can be launched on any system with Docker installed, regardless of the programming language used. For example, imagine a software company «X». A user wants to download and install its software on his server. Without packaging, the administrator would have to download all the source code and dependencies, install a compiler, and then build and run the program, which is quite difficult. Repositories simplify this process: instead of downloading the sources and compiling them with all the necessary parameters and dependencies, you simply run "apt-get install app_name" and the package manager installs the application with everything it requires. But this approach has drawbacks. By no means all companies prepare and maintain a repository for every popular Linux distribution, and the packaged applications are not always up to date. The idea behind Docker is that the vendor packages the program with all the necessary dependencies and distributes it as a container that runs on the Docker platform. All the user needs to do is download the container, run it, and later update it.
Let me explain this in more detail.
Let’s have a look at the Docker architecture.
The Docker daemon runs on the host machine. It downloads and uploads images, runs containers from images, keeps an eye on launched containers, collects logs, and configures the relationships between containers. It is also the daemon that builds container images, although it may look as if the Docker client does it.
The Docker client is a command-line utility that manages the Docker daemon over HTTP. It is arranged very simply and works extremely quickly. Contrary to a common misconception, you can manage the Docker daemon from anywhere, not only from the same machine.
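As a sketch of remote management: the client's `-H` flag points it at a daemon on another host. The hostname `remote-host` is a placeholder, and port 2375 is the conventional unencrypted Docker port (in practice you would use TLS on 2376):

```shell
# Ask a daemon on another machine to list its running containers.
# "remote-host" is a hypothetical hostname; the daemon there must be
# configured to listen on a TCP socket.
docker -H tcp://remote-host:2375 ps
```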
A Docker image is a read-only template. For example, an image can contain Ubuntu with Apache and an application installed on it. Images are used to create containers. Docker makes it easy to create new images, update existing ones, or download images created by other people.
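For instance, downloading a ready-made image and listing what is available locally takes two commands:

```shell
# Download the official Ubuntu image from the default registry (Docker Hub)
docker pull ubuntu

# List the images now available on this machine
docker images
```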
A Docker registry stores images. There are public and private registries. The public Docker registry is Docker Hub, which hosts a huge collection of images. The images there can be created by you, or you can use images created by other people.
Containers are similar to directories: a container holds everything the application needs to run. Each container is created from an image. Containers can be created, started, stopped, moved, or deleted. Each container is isolated and provides a secure platform for the application.
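The lifecycle described above maps directly onto CLI commands. The container name `demo` below is just an illustrative label:

```shell
# Create and start a container from the ubuntu image, running a one-off command
docker run --name demo ubuntu echo "hello"

# List all containers, including stopped ones
docker ps -a

# Start the stopped container again, stop it, and finally delete it
docker start demo
docker stop demo
docker rm demo
```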
How does the image work?
As we know, an image is a read-only template which you can use to create a container. Every image consists of a series of layers. Docker uses a union file system to combine these layers into a single image. A union file system allows files and directories from separate file systems to be transparently overlaid, creating a single consistent file system. This layered design is one of the main reasons Docker is so lightweight. When you change an image, for example when you update the application, a new layer is created. You don't need to replace the whole image or rebuild it from scratch, as you might with a virtual machine; you only add a new layer and update the image. At the core of each image there is a base image. For example, ubuntu is the base image for Ubuntu, and fedora for Fedora. You can also use your own images as the basis for new ones: if you have an Apache image, you can use it as the base image for your web applications.
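You can see these layers for yourself: `docker history` lists every layer an image is built from, newest first, along with the instruction that created it:

```shell
# Show the layers that make up the ubuntu image
docker history ubuntu
```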
Note! Docker usually takes images from the Docker Hub registry.
Docker images are built from a base image by applying a series of build steps; these steps are called instructions. Each instruction creates a new layer. Common instructions include RUN (execute a command inside the image), COPY and ADD (copy files into the image), CMD (the default command to run when a container starts), and EXPOSE (the port the application listens on).
These instructions are stored in a file called Dockerfile. Docker reads the Dockerfile, executes the instructions one by one, and returns the final image.
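A minimal sketch of this workflow, assuming we want an Ubuntu image with Apache (the image tag `my-apache` is a placeholder):

```shell
# Write a minimal Dockerfile: base image, install Apache, document the
# port, and set the default command to run when a container starts
cat > Dockerfile <<'EOF'
FROM ubuntu
RUN apt-get update && apt-get install -y apache2
EXPOSE 80
CMD ["apachectl", "-D", "FOREGROUND"]
EOF

# Build an image from the Dockerfile in the current directory
docker build -t my-apache .
```

Each of the four instructions above produces one layer in the resulting image.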
A registry is a repository of Docker images. After creating an image, you can publish it on the public Docker Hub registry or on your own private registry. Via the Docker client, you can search for images that have already been published and download them to a machine with Docker installed in order to create containers. Docker Hub provides both public and private image repositories. Anyone can search and download images from public repositories, while the contents of private repositories do not appear in search results: only you and your users can pull those images and create containers from them.
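Publishing to Docker Hub might look like the following sketch, where `myuser` stands in for your Docker Hub account name and `my-apache` for a locally built image:

```shell
# Tag the local image with your Docker Hub namespace
docker tag my-apache myuser/my-apache

# Authenticate and publish the image
docker login
docker push myuser/my-apache

# Once public, anyone can find and download it
docker search my-apache
docker pull myuser/my-apache
```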
A container consists of an operating system, user files, and metadata. As we know, each container is created from an image. The image tells Docker what the container holds, which process to launch when the container starts, and other configuration data. A Docker image is read-only. When Docker launches a container, it creates a read/write layer on top of the image (using the union file system mentioned earlier), and our application runs in that layer.
$ sudo docker run -i -t ubuntu /bin/bash
When we run this command, Docker does the following:
- checks whether the ubuntu image is present locally and, if not, downloads it from Docker Hub;
- creates a new container from the image;
- allocates a file system and mounts a read/write layer on top of the image;
- creates a network interface and assigns the container an IP address;
- executes the process we specified, /bin/bash;
- attaches our terminal to the process (the -i and -t flags), so we can interact with the shell inside the container.
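The read/write layer is easy to observe: after changing a file inside a container, `docker diff` shows exactly what diverged from the read-only image. The container name `rwdemo` is just an illustrative label:

```shell
# Start a container and create a file inside it
docker run --name rwdemo ubuntu touch /hello.txt

# List what changed relative to the image: the new file lives in the
# container's read/write layer, while the image itself is untouched
docker diff rwdemo

# Clean up
docker rm rwdemo
```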
As you can see, Docker lets us develop and deploy applications faster. Everybody wins here: developers spend less time on project configuration, and business owners get projects done faster and reduce time-to-market.
We will be glad to implement this approach on your next project.