Centralized logs using Logstash with RediSearch


Ankita Unhalkar and Nishant George

07 May 2020

Logs have been essential to troubleshooting application and infrastructure performance for as long as software has existed. As the number of applications grows, managing all of their logs becomes tedious. Centralized logging helps here: it aggregates logs from many applications into a central place where you can run search queries against them. This is commonly achieved with the Elastic Stack.


Fig 1: Logstash with Elasticsearch



Logstash is a powerful log-management tool from the Elastic Stack. It is a data processing pipeline that collects data from different sources, parses it in transit, and delivers it to a destination of your choice for later use. Elasticsearch, a distributed full-text search engine with rich analytics capabilities, is often used as the data store for logs processed with Logstash.


RediSearch is also a full-text search and aggregation engine, built as a module on top of Redis. In published benchmarks, RediSearch performs strongly compared to other search engines such as Elasticsearch or Solr. This is possible because of Redis' robust in-memory architecture, modern data structures, and optimized code written in C, whereas Elasticsearch is based on the Lucene engine and written in Java. RediSearch is powerful yet simple to manage and maintain, and it is efficient enough to serve as a standalone database or to augment existing Redis databases with advanced, powerful indexing capabilities.
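To get a feel for what RediSearch offers, here is a quick redis-cli sketch using RediSearch 1.x commands; the index name, schema, and sample log entry are purely illustrative:

```
# create an index with a full-text message field and a tag field for the level
FT.CREATE logs SCHEMA message TEXT level TAG

# add a log entry as a document (docId "log:1", score 1.0)
FT.ADD logs log:1 1.0 FIELDS message "connection refused by upstream" level error

# full-text search across indexed log messages
FT.SEARCH logs "connection refused"
```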


Fig 2: Logstash with RediSearch




The Logstash data processing pipeline has three stages: input, filter, and output.
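As a sketch, a minimal pipeline touching all three stages might look like this; the file path and grok pattern here are illustrative, not part of the setup described below:

```
input {
  file {
    path => "/var/log/app.log"
  }
}

filter {
  # parse a timestamp and log level out of each line
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}

output {
  stdout { codec => rubydebug }
}
```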


In order to store logs in RediSearch, a Logstash output plugin for RediSearch was created. This output plugin receives log messages from the Logstash pipeline and stashes them in RediSearch.


Let’s see some examples of the usage and configuration of RediSearch in the logstash pipeline.


  1. Filebeat
  2. Logstash
  3. RediSearch
  4. RedisInsight

Filebeat has an input plugin to collect logs from various sources, and RediSearch has an output plugin to store all incoming logs from the input plugin.

  1. Configure the file /etc/filebeat/filebeat.yml: enable the Filebeat input to read logs from the specified path, and change the output from Elasticsearch to Logstash.

filebeat.inputs:
  # enable filebeat to read the log file
  - type: log
    enabled: true
    paths:
      - /path/to/logfile

# set up filebeat to send output to logstash
output.logstash:
  hosts: ["logstash:5044"]
  2. Configure the Logstash pipeline by creating a pipeline configuration file:
     - configure the Logstash input to listen to Filebeat on port 5044
     - configure the Logstash output to RediSearch (by default, RediSearch listens on port 6379)

input {
  beats {
    port => 5044
  }
}

output {
  redisearch {
    # connection settings; see the configuration options table below
    host => "localhost"
    port => 6379
  }
}

Configuration options for redisearch:


Setting              Input type   Default
host                 string       ""
port                 number       6379
index                string       "logstash-<current date>"
reconnect_interval   number       1
batch_events         number       50
batch_timeout        number       5
ssl                  boolean      false
password             password     (none)


  1. Restart the Filebeat and Logstash services to apply the configuration changes.
  2. To verify that each log is being stored as a RediSearch document, use a Redis visualization tool such as RedisInsight.
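Alternatively, you can query the index directly from redis-cli. The index name below assumes the plugin's default "logstash-<current date>" naming; the date shown is illustrative:

```
# list up to 10 documents matching "error" in the index written by the plugin
FT.SEARCH "logstash-2020.05.07" "error" LIMIT 0 10
```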

The open-source project is hosted at the link below:



Note that at this time, this plugin is in the preview stage and is not yet recommended for production use. Do take it for a spin and unleash the power of Redis and RediSearch for your log monitoring needs!
