Logging is one of the most crucial parts of any application. Proper logging can save developers and support teams countless hours and headaches while debugging. Developers aren't the only ones who benefit: system admins and security teams also have a lot to gain from good logs. A well-designed logging system is an important asset for any project, and Django's built-in logging support makes it easy to set up logging for your application.
Django applications can easily send logs to multiple destinations. In an enterprise setup, however, your logs are likely spread across multiple systems, and you would need to log in to each of them to pull the logs.
This is where centralized logging shines. These are its main advantages:
- All the log data is stored in a single place; there is no need to pull data from multiple servers.
- Log data can be searched, exported, and archived without affecting the original log files.
- Log data can be parsed, which helps when logs arrive in multiple formats such as JSON, XML, or plain text.
How to set it up?
There are many services you can ship your logs to. For the scope of this blog we'll focus on two approaches: Logentries and the ELK stack.
Using Logentries
- Create a Logentries Account
- Choose Python in Libraries

- Create a log set

- Install the Logentries Python logging library with pip (see the install command after this list).
- Generate a log token
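The handler class used in the settings below is provided by the logentries package on PyPI (an assumption based on the class path 'logentries.LogentriesHandler'); installing it typically amounts to:
pip install logentries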

In your settings.py file (preferably your production settings), add the following:
LOGENTRIES_TOKEN = 'your-logentries-token'
LOGGING = {
    # ...
    'handlers': {
        # ...
        'logentries': {
            'level': 'INFO',
            'token': LOGENTRIES_TOKEN,
            'class': 'logentries.LogentriesHandler',
        },
    },
    'loggers': {
        # ...
        'app_name': {
            # the 'file' handler will accept only ERROR and higher levels
            'handlers': ['file', 'logentries'],
            'level': 'DEBUG',
        },
    },
}
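With this in place, application code emits records the usual way and the 'logentries' handler ships them off. A minimal sketch, assuming a view module in an app whose logger name matches the 'app_name' key above:
# views.py (illustrative only)
import logging

from django.http import JsonResponse

logger = logging.getLogger('app_name')  # must match the logger name in LOGGING

def health_check(request):
    # INFO and above is forwarded to Logentries by the handler configured above
    logger.info('Health check requested from %s', request.META.get('REMOTE_ADDR'))
    try:
        return JsonResponse({'status': 'ok'})
    except Exception:
        # exception() logs at ERROR level and includes the traceback
        logger.exception('Health check failed')
        raise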
Using ELK Stack
The ELK Stack consists of three open source products from Elastic: Elasticsearch, Logstash, and Kibana.
Elasticsearch is a distributed RESTful search engine built for the cloud and based on Lucene.
Logstash is an open source, server-side data processing pipeline that receives data from a multitude of sources simultaneously, transforms it, and then sends it to your target stash.
Kibana is a browser-based analytics and search dashboard for Elasticsearch and is your window into the Elastic Stack.
Apart from collecting the log data, you can also generate graphical dashboards and reports using Kibana.
We will use an Amazon EC2 instance running Ubuntu 16.04 to collect logs from a local Django app. The ELK stack will run in Docker containers on that instance.
Set Up an EC2 Machine with ELK
- Launch an Amazon EC2 instance with Ubuntu 16.04 (with at least 3 GB of RAM)
- SSH into the machine
ssh -i ~/.ssh/<your-key>.pem ubuntu@<ec2-ip>
- Add GPG key for official Docker repository
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
- Add Docker repository to APT sources
sudo apt-add-repository 'deb https://apt.dockerproject.org/repo ubuntu-xenial main'
- Update the package database with the Docker packages
sudo apt-get update
- Check that the installation source is the Docker repo instead of the default Ubuntu 16.04 repo
apt-cache policy docker-engine
The output should be similar to
docker-engine:
  Installed: (none)
  Candidate: 17.05.0~ce-0~ubuntu-xenial
  Version table:
     17.05.0~ce-0~ubuntu-xenial 500
        500 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages
     17.04.0~ce-0~ubuntu-xenial 500
        500 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages
     17.03.1~ce-0~ubuntu-xenial 500
        500 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages
- Notice that docker-engine is not installed, but the candidate for installation is from the Docker repository for Ubuntu 16.04. The docker-engine version number might be different.
- Install Docker
sudo apt-get install -y docker-engine
- Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it’s running
sudo systemctl status docker
The output should be similar to
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2017-05-29 00:20:54 UTC; 32s ago
     Docs: https://docs.docker.com
 Main PID: 3223 (dockerd)
   CGroup: /system.slice/docker.service
           ├─3223 /usr/bin/dockerd -H fd://
           └─3228 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/li
- Docker requires root privileges by default. To avoid prefixing every docker command with sudo, add your user to the docker group
sudo usermod -aG docker $(whoami)
- Log out of the machine and SSH in again so the new group membership takes effect
- Install Docker compose
sudo curl -o /usr/local/bin/docker-compose -L "https://github.com/docker/compose/releases/download/1.11.2/docker-compose-$(uname -s)-$(uname -m)"
- Next, we’ll set the permissions:
sudo chmod +x /usr/local/bin/docker-compose
- Verify that the installation was successful by checking the version
docker-compose -v
Output should be
docker-compose version 1.11.2, build dfed245
- Instead of installing Elasticsearch, Logstash, and Kibana separately, we'll use the docker-elk GitHub repo.
- Set up the Docker containers for ELK
cd
git clone https://github.com/deviantony/docker-elk
- Launch the stack
cd docker-elk
docker-compose up -d
- After some time, check the status of your containers
docker ps
The output should be similar to
CONTAINER ID   IMAGE                     COMMAND                  CREATED        STATUS        PORTS                                            NAMES
8575dcd40d91   dockerelk_kibana          "/bin/sh -c /usr/l..."   10 hours ago   Up 10 hours   0.0.0.0:5601->5601/tcp                           dockerelk_kibana_1
ef994edf6107   dockerelk_logstash        "/usr/local/bin/do..."   10 hours ago   Up 10 hours   5044/tcp, 0.0.0.0:5000->5000/tcp, 9600/tcp       dockerelk_logstash_1
22032232767e   dockerelk_elasticsearch   "/bin/bash bin/es-..."   10 hours ago   Up 10 hours   0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   dockerelk_elasticsearch_1
- Now test if Kibana is running by going to
<ec2-ip>:5601
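You can also confirm that Elasticsearch itself is reachable on port 9200 (exposed by the docker-elk defaults); depending on the docker-elk version this may ask for the stack's default credentials:
curl http://<ec2-ip>:9200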
NOTE: If you hit an Elasticsearch out-of-memory error, you need to increase vm.max_map_count. Edit /etc/sysctl.conf on the host:
sudo vi /etc/sysctl.conf
and add the line:
vm.max_map_count = 262144
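To apply the setting without rebooting, the standard sysctl invocation works as well:
sudo sysctl -w vm.max_map_count=262144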
Using ELK in Django
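The TCPLogstashHandler referenced in the settings below is provided by the python-logstash package on PyPI (an assumption based on the handler class path); install it into your Django project's environment first:
pip install python-logstash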
- In your settings.py file, add
LOGGING = {
    # ...
    'handlers': {
        # ...
        'logstash': {
            'level': 'INFO',
            'class': 'logstash.TCPLogstashHandler',
            'port': 5000,
            'host': '192.0.0.1',  # IP/hostname of the EC2 instance
            'version': 1,
            'message_type': 'logstash',
            'fqdn': True,
            'tags': ['meetnotes'],
        },
    },
    'loggers': {
        # ...
        'app_name': {
            # the 'file' handler will accept only ERROR and higher levels
            'handlers': ['file', 'logstash'],
            'level': 'DEBUG',
        },
    },
}
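Once this is in place, anything logged through the 'app_name' logger is shipped to Logstash over TCP. python-logstash also forwards fields passed via extra, which show up as separate, searchable fields in Kibana; a minimal sketch (the field names here are made up for illustration):
import logging

logger = logging.getLogger('app_name')

# Keys in `extra` become individual fields in the Logstash event,
# so they can be filtered and aggregated in Kibana.
logger.info('order placed', extra={'order_id': 1042, 'amount_usd': 99.5})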
Things to Note
- To fully utilize Elasticsearch's search capability, it is recommended to send log messages as JSON. This can easily be set up by updating logstash.conf:
cd ~/docker-elk/logstash/pipeline/
vi logstash.conf
Update the input section and add the codec:
input {
  tcp {
    port => 5000
    codec => json
  }
}
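After editing the pipeline configuration, the Logstash container needs to pick up the change; with docker-compose this is typically:
docker-compose restart logstash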
- For production environments, it is recommended to use Amazon ECS. Since the stack already runs on Docker and scaling containers by hand can get tedious, ECS helps you quickly scale up or down.
Summary
Centralized logging is easy to set up, and it will certainly help when you are debugging issues. So don't wait for the moment when you find yourself wishing you had the logs.