15 Mar 2020
Best Practices of Containerization
   
Judelyn Gomes
#IT | 4 Min Read
Docker is currently the most widely used container platform in the technology world. It has been open source since its inception, which has helped Docker dominate the current market. Currently, around 30% of enterprises use Docker in their AWS ecosystem, and that number continues to grow.

Docker makes it easier to create, deploy, and run applications by packaging them, together with everything they need to run, into containers. It is designed to benefit both developers and system administrators, making it a part of many DevOps (development + operations) toolchains. For developers, it grants the freedom to focus on writing code without worrying about the system it will ultimately run on, and it lets them get a head start by reusing one of the thousands of programs already designed to run in a Docker container as part of their application. For the operations team, Docker gives flexibility and potentially reduces the number of systems needed because of its small footprint and low overhead.

However, with a large number of teams adopting Docker to follow the trend, there are a few gaps we often come across in how it is used. These gaps often lead to critical cost, performance, and security issues. Here are a few tips with which these gaps can be avoided or mitigated. Note that these are basic checkboxes for your Docker setup; one size doesn't fit all, so tweak them based on your use case.

  • By definition, containers are ‘lightweight’, which means that in most use cases the image size doesn’t need to exceed 200 MB.
  • Smaller images also mean faster builds. If your build and deployment time is more than 15 minutes, fix it; this would help you save significant time and cost on your build resources.
  • Use Docker layering effectively: order the Dockerfile so that the instructions that change between image versions sit at the bottom, keeping the earlier, stable layers cacheable.
  • Your container should be self-contained, i.e. it should have no external dependencies.
  • Only one image should propagate from the dev environment to production. If you’re rebuilding the image for production, you’re not leveraging the consistency benefits of Docker.
  • Logs should go to stdout and stderr, i.e. don’t log to a file; log to the console instead. Docker automatically forwards all standard output from containers to the built-in logging driver.
  • Run as a user with the least privileges, and don’t use sudo with any docker command; doing so can open major security loopholes.
  • Don’t use the ‘latest’ tag; it is not immutable. Pin explicit versions to keep control over what you deploy.
  • Even non-technical team members should be able to run applications with a simple docker pull followed by docker-compose up or docker run.
  • Verify base images and prevent images from unknown sources from being pulled by enabling Docker Content Trust with the following command – export DOCKER_CONTENT_TRUST=1
  • Keep the build context small by keeping a minimal number of files in the directory where you run docker build. You can also use a .dockerignore file to exclude files from the build context.
  • After running apt-get install, clean the /var/lib/apt/lists/* directory to remove the downloaded package lists in the same RUN instruction, so the deleted files don’t survive in an earlier layer. This helps reduce image size.
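Several of the tips above — pinned base tags, layer ordering, apt cleanup in a single RUN, and running as an unprivileged user — can be combined in one Dockerfile. The following is a minimal sketch assuming a Debian-based Python service; the package, user, and file names are illustrative placeholders, not a prescribed setup.

```dockerfile
# Pin an explicit, slim base tag instead of 'latest'.
FROM python:3.12-slim

# Install OS packages and clean the apt lists in the SAME layer,
# so the deleted files don't remain in an earlier layer.
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl \
 && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Copy the dependency manifest first: this layer stays cached until
# requirements.txt changes, even when the source code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Source code changes most often, so it goes near the bottom.
COPY . .

# Run as an unprivileged user, not root.
RUN useradd --create-home appuser
USER appuser

# The process logs to stdout/stderr; Docker's logging driver collects it.
CMD ["python", "-m", "app"]
```

The ordering matters: everything above the `COPY . .` line is rebuilt only when dependencies change, which keeps routine code-change builds fast.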
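The Content Trust and build-context tips translate directly into a couple of shell commands. This is a sketch; the `.dockerignore` entries are common examples and should be adapted to your repository.

```shell
# Enable Docker Content Trust so unsigned images are rejected on pull, e.g.:
#   docker pull nginx:1.27.0   # succeeds only if the tag is signed
# (the image name and version tag above are illustrative)
export DOCKER_CONTENT_TRUST=1

# Keep the build context small: list files Docker should never see.
cat > .dockerignore <<'EOF'
.git
__pycache__/
*.log
.env
EOF
```

With the environment variable set, every subsequent `docker pull` and `docker build` in that shell verifies image signatures.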
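On the logging tip: inside the container, the application itself should write to stdout and stderr rather than a file. A minimal sketch in Python (the logger name and format are arbitrary choices, not a required convention):

```python
import logging
import sys

def make_logger(name: str = "app") -> logging.Logger:
    """Configure a logger that writes to stdout/stderr instead of a file,
    so Docker's built-in logging driver can collect the output."""
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    # INFO and below go to stdout; warnings and errors go to stderr.
    out = logging.StreamHandler(sys.stdout)
    out.addFilter(lambda record: record.levelno < logging.WARNING)
    err = logging.StreamHandler(sys.stderr)
    err.setLevel(logging.WARNING)
    fmt = logging.Formatter("%(asctime)s %(levelname)s %(message)s")
    out.setFormatter(fmt)
    err.setFormatter(fmt)
    logger.handlers = [out, err]
    return logger

if __name__ == "__main__":
    log = make_logger()
    log.info("service started")        # visible via `docker logs <container>`
    log.error("something went wrong")  # routed to stderr
```

Because nothing is written to a file inside the container, `docker logs` (and any configured logging driver) sees the complete output with no extra volume mounts.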

On the whole, Docker can get more applications running on the same hardware than other technologies, and it makes it easier for developers to quickly create ready-to-run containerized applications. Docker also makes managing and deploying applications much easier.