Deployment with Docker Compose

Vinit Kumar

27 Dec 2017


Docker Compose is a tool for defining and running multi-container Docker applications. It allows you to create and test applications built on complex software stacks and libraries. In this blog we explore ways to use docker-compose to manage deployments.

Need for Docker Compose

An application can consist of multiple tiers or sub-components. In a containerized deployment, each of these components needs to be deployed as an individual unit. For example, if an application consists of a database and a caching server, then the database and the caching server should be treated as individual components and deployed separately. A very simple philosophy is, “Each container should run only one process”.

 

Running multiple containers using the Docker CLI is possible but really painful. Also, you may need to scale individual components, which adds even more complexity to container management. Docker Compose is a tool that addresses this problem very efficiently. It uses a simple YAML file to describe the complete application and the dependencies between its components. It also provides a convenient way to monitor and scale individual components, which it terms services. In the following sections, we will see how to use docker-compose to manage Charcha’s production-ready deployment.
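
For example, once a docker-compose.yml is in place, day-to-day operations boil down to a handful of commands. The sketch below uses a hypothetical service named web purely for illustration:

   $ docker-compose up -d          # create and start all services in the background
   $ docker-compose ps             # check the state of every service
   $ docker-compose scale web=3    # run three containers of the web service
   $ docker-compose down           # stop and remove the containers and networks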

 

Using docker-compose to manage Charcha’s production-ready deployment

In the previous blog, create production-ready docker image, we created a production-ready Docker image of Charcha. We are going to use the same image in this discussion. Let’s start with a simple compose file. For the production system we need the following things:

 

  1. Database: As per our settings file, we need Postgres.
  2. App Server: A production-ready app server to serve our Django app. We are going to use gunicorn for this.
  3. Reverse Proxy Web Server: Our app server should run behind a reverse proxy to protect it from denial-of-service attacks. Running gunicorn behind a reverse proxy is recommended. This reverse proxy will also perform a few additional things:
     3.1. Serve pre-gzipped static files for the application
     3.2. SSL offloading/termination. Read this to understand the benefits.

 

Let’s build each service step by step in a docker-compose.yml file created at the root of the project. For brevity, every step will only show the configs related to that step.

 

1) Create service for database

  version: '2'
  services:
    db:  # Service name
      # This is important, always restart this service if it gets stopped
      restart: always
      # Use postgres official image
      image: postgres:latest
      # Expose postgres port to be used by the web service
      expose:
        - 5432
      # Credentials and database name for the application
      environment:
        - POSTGRES_PASSWORD=password
        - POSTGRES_USER=user
        - POSTGRES_DB=charcha
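
At this point you can already bring up the database on its own and check that it is healthy, for example:

   $ docker-compose up -d db
   $ docker-compose logs db
   $ # optionally, connect with psql inside the container to verify the database exists
   $ docker-compose exec db psql -U user -d charcha -c '\l'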

 

2) Create service for an app

To create our app service we are going to use the previously discussed [Dockerfile](/2017/05/02/create-production-ready-docker-image) for Charcha. This service will run the database migrations (and hence needs to be linked with the database service) and then run a gunicorn application on port 8000.
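
The service's command is sh ./init.sh. The actual init.sh from the Charcha repository is not reproduced in this post, but based on the steps above it would look roughly like the following sketch (the collectstatic step is an assumption; it may instead happen at image-build time in the Dockerfile from the previous post):

   #!/bin/sh
   # init.sh -- hypothetical sketch, the real script in the repository may differ
   set -e
   # apply pending database migrations
   python manage.py migrate --no-input
   # collect static files into charcha/staticfiles (assumption: may already be done in the Dockerfile)
   python manage.py collectstatic --no-input
   # start the gunicorn application server on port 8000
   exec gunicorn charcha.wsgi -b 0.0.0.0:8000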

Here is the config for the app service, which needs to be added to the previously created docker-compose file. Note that the service is named web, since the nginx config and the final compose file refer to it by that name.

  web:
    build: .
    # For this service run init.sh
    command: sh ./init.sh
    restart: always
    # expose port for other containers
    expose:
      - "8000"
    # Link database container
    links:
      - db:db
    # export environment variables for this container
    # NOTE: In production, value of these should be replaced with
    # ${variable} which will be provided at runtime.
    environment:
      - DJANGO_SETTINGS_MODULE=charcha.settings.production
      - DATABASE_URL=postgres://user:password@db:5432/charcha
      - DJANGO_SECRET_KEY=ljwwdojoqdjoqojwjqdoqwodq
      - LOGENTRIES_KEY=${LOGENTRIES_KEY}
    # init.sh runs the migrations and then starts gunicorn, i.e. effectively:
    #   python manage.py migrate --no-input && gunicorn charcha.wsgi -b 0.0.0.0:8000
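
With the db and web services defined, you can already build and start the application tier on its own and watch the migrations run, for example:

   $ docker-compose build web
   $ docker-compose up -d db web
   $ docker-compose logs web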

 

3) Create service for reverse proxy

To create the reverse proxy service we are going to use the official nginx image and will mount the charcha/staticfiles folder into the nginx container. Before proceeding to create the docker-compose config for this service, we need the following things.

3.1 An SSL certificate:

Charcha’s production settings have been configured to accept only HTTPS requests. Instead of adding the SSL certificate at the app server, we will add the certificate at nginx and offload SSL there, which gives a performance gain. Follow these steps to create a self-signed SSL certificate.

   $ mkdir -p deployment/ssl
   $ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout deployment/ssl/nginx.key -out deployment/ssl/nginx.crt
   $ openssl dhparam -out deployment/ssl/dhparam.pem 4096
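
As a quick sanity check, you can inspect what was generated with standard openssl sub-commands:

   $ openssl x509 -in deployment/ssl/nginx.crt -noout -subject -dates
   $ openssl dhparam -in deployment/ssl/dhparam.pem -noout -text | head -n 1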

 

3.2 An nginx config file:

   # When services are linked, Docker automatically adds a DNS
   # entry for the linked service name.
   # Use an upstream block to resolve the service name.
   upstream backend {
       server web:8000;
   }
   server {
     # listen for HTTPS request
     listen 443 ssl;
     access_log  /var/log/nginx/access.log;
     server_name charcha.hashedin.com;
     ssl_certificate /etc/ssl/nginx.crt;
     ssl_certificate_key /etc/ssl/nginx.key;
     ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
     ssl_prefer_server_ciphers on;
     ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
     ssl_ecdh_curve secp384r1;
     ssl_session_cache shared:SSL:10m;
     ssl_session_tickets off;
     add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
     add_header X-Frame-Options DENY;
     add_header X-Content-Type-Options nosniff;
     ssl_dhparam /etc/ssl/dhparam.pem;
      # Serve all pre-gzipped static files from the mounted volume
     location /static/ {
         gzip_static on;
         expires     max;
         add_header  Cache-Control public;
         autoindex on;
         alias /static/;
     }
     location / {
          # Set these headers to let the application know that the
          # request was made over HTTPS. By default, gunicorn reads the
          # X-Forwarded-Proto header to determine the scheme.
         proxy_set_header Host $host;
         proxy_set_header X-Real-IP $remote_addr;
         proxy_set_header X-Forwarded-Proto $scheme;
         proxy_redirect off;
         # Forward the request to upstream(App service)
         proxy_pass http://backend;
     }
   }
   server {
    # Listen for HTTP request and redirect it to HTTPS
    listen 80;
    server_name charcha.hashedin.com;
    return 301 https://$host$request_uri;
   }
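
The nginx service defined in the next step mounts ./deployment/nginx into /etc/nginx/conf.d inside the container, so save this config under that folder. The official nginx image loads every *.conf file from conf.d; the filename charcha.conf below is just an illustrative choice:

   $ mkdir -p deployment/nginx
   $ # save the config above as, for example, deployment/nginx/charcha.conf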

Now, let’s define the nginx service in docker-compose.

  nginx:
    image: nginx:latest
    restart: always
    ports:
      - 80:80
      - 443:443
    links:
      - web:web
    volumes:
      # deployment is the folder where we added a few configurations in the previous steps
      - ./deployment/nginx:/etc/nginx/conf.d
      - ./deployment/ssl:/etc/ssl
      # attach staticfiles(folder created by collectstatic) to /static
      - ./charcha/staticfiles:/static
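
Once the stack is up (next step), you can confirm that the mounted config and certificates are picked up correctly by running nginx's built-in config test inside the container:

   $ docker-compose exec nginx nginx -t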

 

Finally, we have completed our docker-compose file and all the required configuration, and we are ready to start a production-like environment on a dev box. Run the following steps to start playing with it.

  1. Run services: docker-compose up -d
  2. Verify all services are in a running state with docker-compose ps; you should see output like:
    Name              Command              State          Ports
    -------------------------------------------------------------------------------
    charcha_db_1      docker-entrypoint.sh postgres   Up      5432/tcp
    charcha_nginx_1   nginx -g daemon off;            Up      0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
    charcha_web_1     sh ./init.sh                    Up      8000/tcp
    

Now, you can start accessing the application at https://charcha.hashedin.com.

IMP: The application only accepts charcha.hashedin.com as the hostname, so you can’t access it via localhost. Also, at this point you don’t have a DNS entry for it. A simple workaround is to use your /etc/hosts file for local name resolution: echo "127.0.0.1 charcha.hashedin.com" | sudo tee -a /etc/hosts
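
With the hosts entry in place, you can verify the whole chain from a terminal. The -k flag is needed because the certificate we generated is self-signed:

   $ curl -kI https://charcha.hashedin.com     # should return the application response over HTTPS
   $ curl -I http://charcha.hashedin.com       # should return a 301 redirect to HTTPS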

 

Additional tips to help with debugging

  1. To view the logs for all services, use docker-compose logs
  2. To see the logs for a particular service, use docker-compose logs <service-name>, e.g. docker-compose logs web
  3. To log into a running container, use docker exec -it <container-name> <bash/sh (depends on the image used)>; see the examples below
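
For example, using the container names from the docker-compose ps output above (yours may differ slightly depending on the project folder name):

   $ docker-compose logs web
   $ docker exec -it charcha_web_1 sh          # use bash instead of sh if the image provides it
   $ docker exec -it charcha_db_1 psql -U user -d charcha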

Here is the final docker-compose file:

version: '2'
# define multiple services
services:
  # Web service which runs the gunicorn application
  web:
    # Create build using Dockerfile present in current folder
    build: .
    # For this service run init.sh
    command: sh ./init.sh
    restart: always
    # expose port for other containers
    expose:
      - "8000"
    # Link database container
    links:
      - db:db
    # export environment variables for this container
    # NOTE: In production, value of these should be replaced with
    # ${variable} which will be provided at runtime.
    environment:
      - DJANGO_SETTINGS_MODULE=charcha.settings.production
      - DATABASE_URL=postgres://user:password@db:5432/charcha
      - DJANGO_SECRET_KEY=ljwwdojoqdjoqojwjqdoqwodq
      - LOGENTRIES_KEY=${LOGENTRIES_KEY}
  nginx:
    image: nginx:latest
    restart: always
    ports:
      - 80:80
      - 443:443
    links:
      - web:web
    volumes:
      # deployment is the folder where we added a few configurations
      - ./deployment/nginx:/etc/nginx/conf.d
      - ./deployment/ssl:/etc/ssl
      - ./charcha/staticfiles:/static
  db:
    restart: always
    image: postgres:latest
    expose:
      - 5432
    environment:
      - POSTGRES_PASSWORD=password
      - POSTGRES_USER=user
      - POSTGRES_DB=charcha

Summary

In this deployment-with-docker-compose series, we have so far seen how to create a production-ready Docker image and how to use it with docker-compose. You can try this in production on a single large VM with a few changes, such as reading secrets from environment variables instead of hard-coding them in the compose file; a sketch of that follows.
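
As a minimal sketch of that change: docker-compose substitutes ${VARIABLE} references in the compose file from the shell environment, so after replacing the hard-coded values with, for example, DJANGO_SECRET_KEY=${DJANGO_SECRET_KEY}, the secrets can be supplied at deploy time:

   $ export DJANGO_SECRET_KEY="a-long-random-string"
   $ export LOGENTRIES_KEY="your-logentries-token"
   $ docker-compose up -d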

 

In coming blogs, we will discuss the remaining gaps in docker-compose and how container orchestration services such as ECS, Swarm, or Kubernetes can fill those gaps in a production environment.

