A Guide to Maintaining Containerized Applications

Pratik Tarte and Thenmozhi Arunachalam

19 May 2020

In real-world applications, dockerizing an app is not sufficient in itself; a series of practices needs to be followed once the app is containerized to get the most out of Docker. In this article, we will look at best practices for operating and monitoring containerized applications.

 

Health Check for Containerized Applications

 

‘Prevention is better than cure’ applies not only to our health but also to our applications. Carrying out regular health checks for an application, together with the services it depends on, helps minimize business downtime.

 

Usually, containerized applications are managed using an orchestration tool, the most popular of which is Kubernetes. Kubernetes uses the Readiness Probe to find out when a container is ready to serve traffic and the Liveness Probe to decide when to restart it. Since the Liveness Probe primarily checks the health of an application, it is important to write a deep health check that covers all the dependent services used by the application instead of relying on a shallow one. This ensures that our application is in good shape and utilizes resources as intended, and it allows us to detect issues before they affect the entire system. Automating the health-check process makes keeping track of the application even more efficient.

 

Example of Custom Health Check Implementation for a Database:

 


@Autowired
private DataSource dataSource;

public BaseHealthBean performHealthCheck() {
    BaseHealthBean mySQlHealthBean = new MySQlHealthBean();
    // Obtaining a connection (and closing it via try-with-resources) is the actual check
    try (Connection connection = dataSource.getConnection()) {
        mySQlHealthBean.getHealthCheckDetails().setHealthy(true);
    } catch (SQLException e) {
        // Record the failure and its cause so the health report explains what went wrong
        log.error("Error making database connection", e);
        mySQlHealthBean.getHealthCheckDetails().setHealthy(false);
        mySQlHealthBean.getHealthCheckDetails().setError(e.getMessage());
    }
    return mySQlHealthBean;
}
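
One convenient way to surface a check like this to the probes is through Spring Boot Actuator, which aggregates every HealthIndicator bean into the /actuator/health response that a Liveness or Readiness Probe can call over HTTP. Below is a minimal sketch under that assumption; the DatabaseHealthIndicator class name is our own.

import java.sql.Connection;
import javax.sql.DataSource;

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

// Illustrative deep health check: Actuator merges this into /actuator/health,
// so the Kubernetes probes get a real answer about the database, not just
// a "process is running" response.
@Component
public class DatabaseHealthIndicator implements HealthIndicator {

    private final DataSource dataSource;

    public DatabaseHealthIndicator(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public Health health() {
        // isValid() round-trips to the database instead of only checking
        // that a pooled connection object could be handed out
        try (Connection connection = dataSource.getConnection()) {
            if (connection.isValid(2)) {
                return Health.up().withDetail("database", "reachable").build();
            }
            return Health.down().withDetail("database", "connection not valid").build();
        } catch (Exception e) {
            return Health.down(e).build();
        }
    }
}

On Spring Boot 2.3 and later, setting management.endpoint.health.probes.enabled=true additionally exposes /actuator/health/liveness and /actuator/health/readiness, which map naturally onto the two probes.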

 

Integration with a Performance Monitoring App

 

Performance monitoring tools are a key part of running an application, and most companies use one or more of them to monitor service performance. An APM service provides application insights through different methods and also helps gauge end-user satisfaction.

 

Let’s consider NewRelic, a leading performance monitoring tool, as an example.
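
The heavy lifting of the integration is done by attaching the NewRelic Java agent to the JVM when the container starts, but the agent API can also add custom context to what the dashboard shows. The snippet below is a rough sketch using the newrelic-api dependency; the OrderService class and the attribute names are purely illustrative.

import com.newrelic.api.agent.NewRelic;
import com.newrelic.api.agent.Trace;

public class OrderService {

    // @Trace asks the agent to record this method as part of the current
    // transaction so it shows up in transaction traces
    @Trace
    public void placeOrder(String orderId, String customerTier) {
        // Custom attributes become filterable dimensions in the APM UI
        NewRelic.addCustomParameter("orderId", orderId);
        NewRelic.addCustomParameter("customerTier", customerTier);

        try {
            // ... business logic ...
        } catch (RuntimeException e) {
            // Report handled errors so they remain visible in APM
            NewRelic.noticeError(e);
            throw e;
        }
    }
}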

 

 

Sample NewRelic APM Dashboard

 

 

Metrics for Containerized Applications

 

To scale an application while providing a reliable service and good performance, we need to understand how the application behaves once it is deployed on highly available infrastructure. Using the container APIs, we can collect container resource information such as CPU, memory, and network utilization. These metrics are tracked by different monitoring tools and analyzed in real time to improve performance, and they also help in defining the lifecycle policy of the containers.
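
As a rough illustration of where these numbers come from, the agents that monitoring tools deploy (cAdvisor, the kubelet, and so on) ultimately read usage counters from the cgroup filesystem maintained by the container runtime. The sketch below, which assumes cgroup v1 paths and Java 11+, simply prints two of those raw counters from inside a container.

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Prints raw memory and CPU usage counters exposed by the cgroup filesystem
// inside a container (cgroup v1 layout assumed)
public class ContainerStats {

    public static void main(String[] args) throws Exception {
        Path memUsage = Paths.get("/sys/fs/cgroup/memory/memory.usage_in_bytes");
        Path cpuUsage = Paths.get("/sys/fs/cgroup/cpuacct/cpuacct.usage");

        if (Files.exists(memUsage)) {
            System.out.println("Memory used (bytes): " + Files.readString(memUsage).trim());
        }
        if (Files.exists(cpuUsage)) {
            System.out.println("CPU time (ns): " + Files.readString(cpuUsage).trim());
        }
    }
}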

 

We can monitor application performance by understanding the characteristics of containers, pods, services, the network, and the overall cluster status through these API metrics. By scraping this metrics data into advanced monitoring dashboards like Grafana, we can visualize the KPIs for an application. Moreover, dashboards like these, reading data from Prometheus metrics, can play an instrumental role in data analytics and cost optimization, and help us see through the complexity we manage every day, which ultimately improves the user experience.

 

 

Spring Boot Actuator Properties for the Metrics API:

 

management.metrics.export.prometheus.enabled=true

management.health.redis.enabled=true

management.endpoints.web.path-mapping.metrics=/metrics-json

management.endpoints.web.path-mapping.prometheus=/metrics

management.metrics.distribution.percentiles[http.server.requests]=0.50, 0.75, 0.95, 0.99
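
With the Prometheus endpoint enabled as above, business-level KPIs can be published alongside the built-in HTTP and JVM metrics through Micrometer, which Spring Boot Actuator uses under the hood. The following sketch is illustrative only; the meter names and the OrderMetrics class are our own.

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import org.springframework.stereotype.Component;

// Registers a couple of business-level meters; Prometheus scrapes them from
// the /metrics path mapped in the properties above
@Component
public class OrderMetrics {

    private final Counter ordersPlaced;
    private final Timer checkoutTimer;

    public OrderMetrics(MeterRegistry registry) {
        this.ordersPlaced = Counter.builder("orders.placed")
                .description("Number of orders placed")
                .tag("channel", "web")
                .register(registry);
        this.checkoutTimer = Timer.builder("orders.checkout.duration")
                .description("Time taken to complete checkout")
                .register(registry);
    }

    public void recordOrder(Runnable checkout) {
        // Time the checkout flow, then count the order once it succeeds
        checkoutTimer.record(checkout);
        ordersPlaced.increment();
    }
}

These meters then show up on the Prometheus scrape path configured above and can be graphed in Grafana next to the container-level CPU and memory panels.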

 

Sample Grafana Dashboard

 

 

Logging and Storage

 

For a multi-container environment, you have to aggregate logs. This is done with Docker logging drivers and log shippers, which forward the logs to a visualization/search tool. The Docker daemon provides the log driver as a plugin; by default it is the json-file driver. Due to several limitations, you wouldn’t want to use the default driver in a production setup. A better setup is the Fluentd driver, which also helps ship the logs to the eventual visualization tool.

 

A typical open-source stack uses Fluentd with the ELK stack and creates separate indexes for infrastructure, container, and application logs for proper visualization. There are several other commercial and open-source plugins that may be better suited to different requirements.
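
Whichever driver and shipper you choose, the logs become far easier to index and search when the application attaches consistent context to every line. A minimal sketch using SLF4J’s MDC is shown below; the requestId and container field names are illustrative, and routing them into the right index is still done on the Fluentd/ELK side.

import java.util.UUID;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class RequestLoggingExample {

    private static final Logger log = LoggerFactory.getLogger(RequestLoggingExample.class);

    public void handleRequest() {
        // MDC fields are emitted with every log line (given a layout that
        // includes them) and become searchable fields in ELK
        MDC.put("requestId", UUID.randomUUID().toString());
        MDC.put("container", System.getenv().getOrDefault("HOSTNAME", "unknown"));
        try {
            log.info("Processing request");
            // ... request handling ...
        } finally {
            MDC.clear();
        }
    }
}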

 

Once you have logs, you will need to worry about persistent storage. Ideally, use inexpensive but persistent storage such as magnetic block storage. A persistent volume helps achieve high availability, fault tolerance, and consistency. Based on the lifecycle policy, logs can be categorized and archived in cloud storage for future analysis.

 

Conclusion

 

In a nutshell, monitoring containerized applications for KPIs and health, and setting up proper logging and storage, is as important as containerizing and deploying them. With the help of the tools and practices described in this blog, it becomes much easier to manage and monitor your containerized applications, prepare for contingencies beforehand, and make them more reliable.

