A Guide to Setting Up Centralized Logging in Django

Technology - 29 May 2017
Sripathi Krishnan

Logging is one of the most crucial parts of any application. Proper logging can save countless hours and headaches for the developers and support teams debugging a problem. And it is not only developers: system admins and security teams also have a lot to gain from logging. A well-designed logging system is an important asset for any project.

Django’s built-in logging system makes it very easy to set up logging for your application, and Django applications can easily distribute logs to multiple destinations. In an enterprise setup, though, your logs are likely to be spread across multiple systems, and you’d need to log in to each of them to pull the logs.

This is where centralized logging shines. Its main advantages are:

  1. All the log data is stored in a single place; there is no need to pull data from multiple servers.
  2. Log data can be searched, exported, and archived without affecting the original log files.
  3. Log data can be parsed, which helps when it is received in multiple formats such as JSON, XML, and plain text.

How to set it up?

There are many services you can use to collect your logs. For the scope of this blog we’ll be focusing on:

  1. Logentries
  2. ELK Stack (Elasticsearch, Logstash and Kibana)

Using Logentries

  1. Create a Logentries account
  2. Choose Python under Libraries
  3. Create a log set
  4. Install the Python logging package for Logentries from pip (see below)
  5. Generate a log token
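Assuming the official Logentries Python library on PyPI, step 4 would be:

    pip install logentries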

In your settings.py file (preferably your production settings), add the following:
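A minimal sketch of that configuration, assuming the logentries package installed in step 4; the token generated in step 5 is read here from a hypothetical LOGENTRIES_TOKEN environment variable:

    import os

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'simple': {
                'format': '%(levelname)s %(asctime)s %(module)s %(message)s',
            },
        },
        'handlers': {
            # Ships each log record to Logentries, authenticated by the log token
            'logentries': {
                'level': 'INFO',
                'class': 'logentries.LogentriesHandler',
                'token': os.environ['LOGENTRIES_TOKEN'],  # set this in your environment
                'formatter': 'simple',
            },
        },
        'loggers': {
            # Route Django’s own logs (requests, errors, etc.) to Logentries
            'django': {
                'handlers': ['logentries'],
                'level': 'INFO',
            },
        },
    }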


Using ELK Stack

The ELK Stack comprises three products: Elasticsearch, Logstash, and Kibana, all open source products by Elastic.

Elasticsearch is a distributed RESTful search engine built for the cloud and based on Lucene.

Logstash is an open source, server-side data processing pipeline that receives data from a multitude of sources simultaneously, transforms it, and then sends it to your target stash.

Kibana is a browser-based analytics and search dashboard for Elasticsearch, and is your window into the Elastic Stack.

Apart from collecting the log data you can also generate graphical dashboards and reports using Kibana.

We will be using an Amazon EC2 instance running Ubuntu 16.04 to collect logs from a local Django app. The ELK stack will run in Docker.

Setup EC2 Machine with ELK

  1. Launch an Amazon EC2 instance with Ubuntu 16.04 (with at least 3 GB of RAM)
  2. SSH into the machine
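    For example, with the key pair attached to the instance (the key file name and IP are placeholders):

      ssh -i your-key.pem ubuntu@<ec2-ip>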

  3. Add GPG key for official Docker repository
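    At the time of writing, Docker’s official APT repository was apt.dockerproject.org, so this step was presumably:

      sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 \
          --recv-keys 58118E89F3A912897C070ADBF76221572C52609D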

  4. Add Docker repository to APT sources
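    Again assuming the 2017-era apt.dockerproject.org repository for Ubuntu 16.04 (xenial):

      sudo apt-add-repository 'deb https://apt.dockerproject.org/repo ubuntu-xenial main'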

  5. Update the package database with the Docker packages
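    That is the usual APT refresh:

      sudo apt-get update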

  6. Check that the installation source is the Docker repo instead of the default Ubuntu 16.04 repo
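    apt-cache policy shows which repository the package would be installed from:

      apt-cache policy docker-engine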

    Output should be similar to
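    (Illustrative only; version numbers and pin priorities will differ.)

      docker-engine:
        Installed: (none)
        Candidate: 17.05.0~ce-0~ubuntu-xenial
        Version table:
           17.05.0~ce-0~ubuntu-xenial 500
              500 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages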

  7. Notice that docker-engine is not installed, but the candidate for installation is from the Docker repository for Ubuntu 16.04. The docker-engine version number might be different.
  8. Install Docker
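    Presumably, matching the repository added above:

      sudo apt-get install -y docker-engine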

  9. Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it’s running
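    Using systemd:

      sudo systemctl status docker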

    Output should be similar to
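    (Illustrative; timestamps and PIDs will differ.)

      ● docker.service - Docker Application Container Engine
         Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
         Active: active (running)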

  10. Docker requires root privileges by default. To avoid adding sudo to every docker command, add your user to the docker group
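    For the currently logged-in user:

      sudo usermod -aG docker ${USER}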

  11. Logout from the machine and ssh again
  12. Install Docker Compose
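    A sketch of the usual curl-based install; 1.14.0 is an assumed 2017-era release, substitute the version you want:

      sudo curl -L "https://github.com/docker/compose/releases/download/1.14.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose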

  13. Next we’ll set the permissions:
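    Make the binary executable:

      sudo chmod +x /usr/local/bin/docker-compose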

  14. Verify that the installation was successful by checking the version
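    Run:

      docker-compose --version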

    Output should be
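    (Illustrative; the version and build hash depend on the release you installed.)

      docker-compose version 1.14.0, build <build-hash>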

  15. Instead of installing Elasticsearch, Logstash, and Kibana separately, we’ll be using the docker-elk GitHub repo.
  16. Set up the Docker containers for ELK
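    Assuming the widely used deviantony/docker-elk repository:

      git clone https://github.com/deviantony/docker-elk.git
      cd docker-elk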

  17. Launch the stack
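    From inside the docker-elk directory:

      docker-compose up -d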

  18. After some time, check the status of your containers
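    Run:

      docker ps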

    Output should be similar to
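    You should see three containers Up, publishing ports along these lines (IDs and names are placeholders, and some columns are omitted here):

      CONTAINER ID   IMAGE                     STATUS         PORTS                    NAMES
      <id-1>         dockerelk_kibana          Up 2 minutes   0.0.0.0:5601->5601/tcp   dockerelk_kibana_1
      <id-2>         dockerelk_logstash        Up 2 minutes   0.0.0.0:5000->5000/tcp   dockerelk_logstash_1
      <id-3>         dockerelk_elasticsearch   Up 2 minutes   0.0.0.0:9200->9200/tcp   dockerelk_elasticsearch_1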

  19. Now test that Kibana is running by going to <ec2-ip>:5601 in your browser

NOTE: In case you face an Elasticsearch out-of-memory error, you will need to increase the max_map_count kernel setting:

vi /etc/sysctl.conf

Add the line vm.max_map_count = 262144, then apply it with sudo sysctl -p.

Using ELK in Django

  1. In your settings.py file, add
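    A minimal sketch, assuming the python-logstash package (pip install python-logstash) and the TCP input that the docker-elk Logstash pipeline exposes on port 5000:

      LOGGING = {
          'version': 1,
          'disable_existing_loggers': False,
          'handlers': {
              # Sends each log record to Logstash over TCP
              'logstash': {
                  'level': 'INFO',
                  'class': 'logstash.TCPLogstashHandler',
                  'host': '<ec2-ip>',        # public IP of the EC2 instance running ELK
                  'port': 5000,              # Logstash TCP input port
                  'version': 1,              # logstash event schema version
                  'message_type': 'django',  # becomes the 'type' field on each event
                  'fqdn': False,
              },
          },
          'loggers': {
              'django': {
                  'handlers': ['logstash'],
                  'level': 'INFO',
              },
          },
      }

    After a restart, Django’s own logs (request errors and the like) should start appearing in Kibana.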

Things to Note

  • To fully utilise the search capability of Elasticsearch, it is recommended to use JSON for log messages. This can easily be set up by updating the logstash.conf file: update the input section and add the codec information, as shown below.
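    For example, a sketch of the tcp input with a JSON codec (the file lives at logstash/pipeline/logstash.conf in the docker-elk layout):

      input {
          tcp {
              port => 5000
              codec => json
          }
      }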
  • Since Docker is being used, scaling up by hand can become tedious; for production environments it is recommended to use Amazon ECS, which can help you quickly scale up or down.

Summary

Centralized logging is easy to set up, and it will certainly help when you are debugging issues. So don’t wait for the moment when you realize life would have been easier if only you had logs.
