This is the final part of our Analysis of Docker in DevOps series. In part I, we reviewed the basic terms and concepts around Docker and DevOps. In part II, we discussed the different types of requirements placed on Docker and DevOps. In this final part, we will cover employee onboarding, platform independence, logging, and configuration management, and then draw our conclusions.
Analysis of Docker in DevOps: Other Requirements & Platform Independence
Many workers are typically needed during the project phase to implement applications within the given time. Many agencies rely on freelancers and therefore constantly have to integrate new employees into the development environment. This configuration takes a lot of time, because the development environment first has to be explained and set up; at the beginning of a project, a system administrator spends roughly two hours per freelancer on this. In this scenario, a company can greatly reduce the time spent on employee onboarding by using Docker: new people can be integrated into projects quickly and easily, without spending much time on configuration. Docker supports the onboarding of programmers because no complex configuration of all services is required; it is enough to install the Docker daemon and start the containers. If the live system is also based on Docker, the live containers should be used for development as well, so that the occurrence of the “works-on-my-machine” problem is reduced to a minimum. Inconsistencies between the development and live systems are then a thing of the past.
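As a hedged sketch of what this onboarding can look like (the registry and image names below are hypothetical, not taken from a real project), a new developer essentially needs only the Docker daemon and two commands:

```shell
#!/bin/sh
# Hypothetical onboarding sketch: instead of hand-configuring every service,
# a new developer installs Docker and starts the team's preconfigured image.

start_dev_env() {
    # Pull the team's development image (hypothetical registry/name)
    docker pull registry.example.com/shop-dev:latest
    # Run it with the local source tree mounted into the container
    docker run -d --name shop-dev \
        -v "$(pwd)":/app -p 8080:80 \
        registry.example.com/shop-dev:latest
}

# Only attempt this on a host where Docker is actually installed
if command -v docker >/dev/null 2>&1; then
    start_dev_env
fi
```

If the live system is Docker-based, the same image can serve as the basis for the development container, which is exactly what keeps development and production consistent.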
Consider a software company that develops software components for various major customers. The customers need software solutions for their systems, which are divided into small components. The company develops these on its own development systems and then provides them to the customer, who integrates them. The components are tested on local development environments as well as on a shared development system. However, because different operating systems are installed on the employees’ workstations, production-like builds cannot be created everywhere. Although the components are developed completely, integrating them into the production system produces errors that did not occur during development and therefore cannot be reproduced there. With Docker, a lot of time can be saved in this scenario when integrating software components: Docker makes it possible to have the same system requirements in development as in the production system. Furthermore, automated tests can be run in an environment created from the live system’s containers, which also minimizes the differences between the staging and live environments.
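This dev/prod parity boils down to one idea: run the automated test suite inside the same image that runs in production. A minimal sketch (the image name and test command are hypothetical assumptions):

```shell
#!/bin/sh
# Hypothetical sketch: run the test suite inside the production image so
# the tests see exactly the environment of the live system.

test_in_live_image() {
    # --rm removes the throwaway test container afterwards;
    # the project directory is mounted read-only into the container.
    docker run --rm \
        -v "$(pwd)":/app:ro -w /app \
        registry.example.com/shop-live:latest \
        sh -c "./run-tests.sh"
}

# Only attempt this on a host where Docker is actually installed
if command -v docker >/dev/null 2>&1; then
    test_in_live_image
fi
```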
To facilitate the use of Docker on host systems that do not directly meet the requirements of 4.1, Docker provides the docker-machine tool, which completely automates the setup of a preconfigured VM in VirtualBox.
Essentially, docker-machine only provisions a VM with the Docker Engine preinstalled and then ensures that all environment variables required by the Docker client are set correctly and point to the daemon running in the VM. Because the Docker client communicates with the daemon over a plain HTTP API, docker-machine is not limited to deploying a locally virtualized Docker daemon: it can be used in the same way to link the Docker client to a daemon on a remote server. For the HTTP connection to work, the remote daemon must expose the necessary ports.
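Both uses can be sketched as follows (the VM name and the remote hostname are hypothetical):

```shell
#!/bin/sh
# Sketch: docker-machine provisions a VirtualBox VM with the Docker Engine
# preinstalled and points the local client at it via environment variables.

provision_local_vm() {
    docker-machine create --driver virtualbox dev-vm
    # Exports DOCKER_HOST, DOCKER_CERT_PATH and DOCKER_TLS_VERIFY for this
    # shell, so that `docker` now talks to the daemon inside the VM
    eval "$(docker-machine env dev-vm)"
}

# Only attempt this on a host where docker-machine is installed
if command -v docker-machine >/dev/null 2>&1; then
    provision_local_vm
fi

# The same mechanism links the client to a remote daemon: only DOCKER_HOST
# has to point at it (hostname and port are hypothetical; the remote daemon
# must expose this port):
# export DOCKER_HOST=tcp://build-server.example.com:2376
```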
Logging in Docker
Since all data written by a container at runtime is lost when the container is replaced, log files should not be stored in the container itself. Even data volumes are not always suitable, for example when several containers want to write to the same log file simultaneously. A common solution is a centralized log management system, for example an ELK stack (we have a guide on how to install an ELK stack). ELK stands for: Elasticsearch, which handles data storage and acts as a search server; Logstash, which makes sure that the log files are read in and loaded into the search server; and Kibana, which handles the evaluation and visualization of the log data. The advantage of a centralized log management system is that Elasticsearch allows fast searches even over large log volumes. In combination with Docker, it is also possible to search the log files of several containers at the same time, so that not every single container has to be crawled individually. Another advantage is that the log files remain accessible after a container has been exchanged. With tools like Logspout, Docker logs can be passed directly to Logstash, which then writes them to Elasticsearch. The only requirement is that the application in the container writes its logs to STDOUT or STDERR.
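As a sketch, forwarding the logs of all containers on a host to Logstash via Logspout can look like this (gliderlabs/logspout is the public Logspout image; the Logstash hostname and port are assumptions):

```shell
#!/bin/sh
# Sketch: route the STDOUT/STDERR logs of all local containers to a
# central Logstash instance using Logspout.

forward_logs() {
    # Logspout reads container logs through the Docker socket and ships
    # them over syslog/TCP to Logstash (hypothetical endpoint)
    docker run -d --name logspout \
        -v /var/run/docker.sock:/var/run/docker.sock \
        gliderlabs/logspout \
        syslog+tcp://logstash.example.com:5000
}

# Only attempt this on a host where Docker is actually installed
if command -v docker >/dev/null 2>&1; then
    forward_logs
fi
```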
A recurring problem in the DevOps area is the documentation of changes. On servers that host many applications simultaneously, it is difficult to keep track of everything; above all, changes to configurations must be documented. Docker images are immutable and therefore cannot be changed in place: to change the configuration, the Dockerfile has to be adapted and a new image has to be built. The documentation of the container’s configuration state is thus immediately available. If the Dockerfiles are kept under version management, one also gets a full-fledged history/versioning at the configuration level. To document which container was started on which server, Docker can be combined with a configuration management system such as Chef or Puppet.
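A minimal sketch of this workflow (the Dockerfile contents and image tag are illustrative): the configuration change lives in the Dockerfile, so committing the Dockerfile documents the change.

```shell
#!/bin/sh
# Sketch: configuration changes are made in the Dockerfile and a new image
# is built, so every change is documented and versionable.

# Write a minimal Dockerfile (contents are illustrative)
cat > Dockerfile <<'EOF'
FROM nginx:stable
# Configuration is baked into the image, not edited on the host
COPY nginx.conf /etc/nginx/nginx.conf
EOF

# Under version management this yields a full configuration history, e.g.:
#   git add Dockerfile && git commit -m "pin nginx to stable base image"

# Only build on a host where Docker is actually installed
if command -v docker >/dev/null 2>&1; then
    docker build -t example/webserver:2.0 .
fi
```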
Docker containers are always responsible for only one task. However, since applications usually consist of several components, they usually consist of several containers as well. A webshop today, for example, may consist of a load balancer, web server, database, storage, and cache server. To avoid having to start all containers individually during deployment, Docker provides a solution for launching multi-container applications: docker-compose. The configuration of the individual services is described in a docker-compose.yml file in the project.
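A hedged sketch of such a docker-compose.yml for the webshop example (service and image names are illustrative assumptions, not a reference setup):

```shell
#!/bin/sh
# Sketch: describe the webshop's containers once in docker-compose.yml,
# then start the whole stack with a single command.

cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:stable        # web server / load balancer
    ports:
      - "80:80"
    depends_on:
      - app
  app:
    image: example/shop:latest # hypothetical application image
  db:
    image: postgres:16         # database
    volumes:
      - dbdata:/var/lib/postgresql/data
  cache:
    image: redis:7             # cache server
volumes:
  dbdata:
EOF

# Only start the stack on a host where docker-compose is installed
if command -v docker-compose >/dev/null 2>&1; then
    docker-compose up -d       # one command starts every service
fi
```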
In conclusion, DevOps is now indispensable in organizations that have both a development and an operations area. The methods and tools that DevOps uses for coordination and deployment are very helpful for team collaboration and for the release of new software.
Our analysis has shown that Docker can address various issues such as scalability, security, and platform independence, making it an interesting and recommendable tool for DevOps. The automation and configuration of server components can also be handled well with Docker. Since Docker can be adopted quickly and easily, it is already used in many companies and is highly recommended.
In the end, Docker is just one of many tools that help DevOps support the development and operations teams. Docker alone is not the perfect solution for an optimal deployment process; additional tools like Jenkins and Vagrant are recommended to speed up software releases. Docker also benefits from its large community: security updates are made on a regular basis and the software is continually evolving. It is safe to say that Docker will become more and more important in the future and that its further development will probably cover even more topics.