19 May 2020
A Guide To Maintain Containerized Applications
Pratik Tarte
#IT | 5 min read

In real-world applications, dockerizing an app is not sufficient by itself: once the app is containerized, a series of operational practices must follow to get the most out of Docker. In this article, we will look at best practices for operating and monitoring containerized applications.


Health Check for Containerized Applications


‘Prevention is better than cure’ applies not only to our health but also to our applications. Carrying out regular health checks for an application, together with the services it depends on, helps minimize business downtime.


Containerized applications are usually managed with an orchestration tool, the best known of which is Kubernetes. Kubernetes uses the Readiness Probe to determine when a container is ready to serve traffic, and the Liveness Probe to decide when to restart it. Since the Liveness Probe primarily checks the health of an application, it is important to write a detailed health check that covers all the services the application depends on, instead of relying on a shallow health check. This ensures that the application is in good shape, utilizes resources to the desired extent, and lets us detect issues before they affect the entire system. Automating the health-check process makes keeping track of the application far more efficient.
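As an illustration, liveness and readiness probes for a container might be declared roughly as follows; the port, endpoint paths, and timing values here are assumptions to be adjusted to your application:

```yaml
livenessProbe:
  httpGet:
    path: /health        # deep health-check endpoint (hypothetical path)
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 3    # restart only after three consecutive failures
readinessProbe:
  httpGet:
    path: /ready         # readiness endpoint (hypothetical path)
    port: 8080
  periodSeconds: 5
```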


Example of Custom Health Check Implementation for a Database:


   private DataSource dataSource;

   public BaseHealthBean performHealthCheck() {
       BaseHealthBean mySQlHealthBean = new MySQlHealthBean();
       try (Connection connection = dataSource.getConnection()) {
           // A valid connection means the database dependency is healthy
           mySQlHealthBean.setHealthy(connection.isValid(1));
       } catch (SQLException e) {
           log.error("Error making database connection", e);
           mySQlHealthBean.setHealthy(false);
       }
       return mySQlHealthBean;
   }

Integration with a Performance Monitoring App


Performance monitoring is a key part of running an application, and most companies use one or more Application Performance Monitoring (APM) tools to track service performance. An APM service provides application insights through a variety of methods and also helps gauge end-user satisfaction.


Let’s consider New Relic, a leading performance monitoring tool, as an example.


  • Once the New Relic agent is configured on the cluster, performance metrics start flowing to the New Relic endpoints.
  • As a best practice, configure application performance alerts, key transactions, deployment alerts, and push notifications to the respective channels.
  • Create labels for the applications.
  • Review the deployment history to identify the impact each deployment has on end users.
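For reference, the core of a Java-agent newrelic.yml typically sets the license key, an application name, and labels; the values below are placeholders:

```yaml
common: &default_settings
  license_key: '<YOUR_LICENSE_KEY>'
  app_name: orders-service                 # hypothetical application name
  labels: 'env:production;team:platform'   # labels shown in the New Relic UI
```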


Sample NewRelic APM Dashboard



Metrics for Containerized Applications


To scale an application so it provides reliable service and performance, we need to understand how the application behaves once it is deployed on highly available infrastructure. Using the container API, we can collect container resource information such as CPU, memory, and network utilization. These metrics are tracked by different monitoring tools and analyzed in real time for better performance; they also help inform the lifecycle policy of the containers.


We can monitor application performance by understanding the characteristics of containers, pods, services, networks, and overall cluster status through these API metrics. By scraping this metric data into advanced event-monitoring dashboards such as Grafana, we can visualize the KPIs for an application. Dashboards like these, reading data from Prometheus metrics, can play an instrumental role in data analytics and cost optimization, cut through the complexity we manage every day, and ultimately improve the user experience.


  • Create labels and tags based on the applications.
  • Configure alerts based on cluster capacity and resource utilization, and push notifications through the proper channels.
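As a sketch, a Prometheus scrape job for a Spring Boot application exposing Micrometer metrics might look like this; the job name and target address are assumptions:

```yaml
scrape_configs:
  - job_name: 'spring-app'                 # hypothetical job name
    metrics_path: '/actuator/prometheus'   # Actuator's Prometheus endpoint
    scrape_interval: 15s
    static_configs:
      - targets: ['app-service:8080']      # hypothetical service address
```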


Spring Boot Properties using Actuator for Metrics API:





management.metrics.distribution.percentiles[http.server.requests]=0.50, 0.75, 0.95, 0.99
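For intuition, the percentile values that a property like this exports can be approximated with a simple nearest-rank computation over recorded request latencies. This is an illustrative sketch, not Micrometer's actual histogram implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class PercentileSketch {
    // Nearest-rank percentile over a list of latencies (milliseconds)
    public static long percentile(List<Long> latencies, double p) {
        List<Long> sorted = new ArrayList<>(latencies);
        Collections.sort(sorted);
        int idx = (int) Math.ceil(p * sorted.size()) - 1;
        return sorted.get(Math.max(idx, 0));
    }

    public static void main(String[] args) {
        List<Long> latencies =
            Arrays.asList(120L, 80L, 95L, 300L, 110L, 250L, 90L, 105L, 130L, 400L);
        System.out.println("p50=" + percentile(latencies, 0.50));
        System.out.println("p95=" + percentile(latencies, 0.95));
    }
}
```

A single slow request can dominate the p95/p99 values while leaving p50 unchanged, which is exactly why the higher percentiles are the ones worth alerting on.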


Sample Grafana Dashboard



Logging and Storage


In a multi-container environment, you have to aggregate logs. This is done with Docker logging drivers and log shippers, which forward logs to a visualization/search tool. The Docker daemon provides the log driver as a plugin; by default it is a JSON file handler. Due to several limitations, you would not want to use the default driver in a production setup. An ideal setup uses the Fluentd driver, which also helps ship logs to the eventual visualization tool.
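For example, switching the Docker daemon's default log driver to Fluentd can be done in /etc/docker/daemon.json; the Fluentd address below assumes a collector running locally on its default port:

```json
{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "localhost:24224",
    "tag": "docker.{{.Name}}"
  }
}
```

The tag template stamps each log record with the container name, which makes it straightforward to route infra, container, and application logs to separate indexes downstream.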


A typical open-source stack uses Fluentd with the ELK stack, creating separate indexes for infrastructure, container, and application logs for proper visualization. Several other commercial and open-source plugins exist that may be better suited to different requirements.


Once you have logs, persistence of storage becomes the next concern. Ideally, use inexpensive but persistent storage such as magnetic block storage. Persistent volumes help achieve high availability, fault tolerance, and consistency. Based on a lifecycle policy, logs can be categorized and archived in cloud storage for future analysis.
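A persistent volume claim for log storage might be declared as follows; the claim name, storage class, and size are assumptions, and a cheaper class can be chosen for archival logs:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: log-storage          # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard # assumed storage class
  resources:
    requests:
      storage: 20Gi
```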




In a nutshell, monitoring containerized applications for KPIs and health, and setting up proper logging and storage, is as important as containerizing and deploying them in the first place. With the help of the tools and methodologies described in this blog, it becomes much easier to manage and monitor your containerized applications, prepare for contingencies beforehand, and make the application more reliable.
