12 Apr 2018
Technical Best Practices To Consider When Building an Enterprise Software Product
Kousick S
#Technology | 11 min read

As product development enthusiasts and owners, we build software every day, whether as part of our service offerings to clients or as products of our own. The primary intent is to build enterprise software that solves problems or makes work easier. The need the product addresses, or the use case it serves, is fundamental to the list of features it is built with.

Having built many software products in the past, and having had the opportunity to see them move through the product development lifecycle (development, deployment, support), I have collected a few considerations that matter greatly from a design and development point of view if your product, beyond its key features, is to become enterprise-level software.

Building Enterprise Software

I have tried to cover points that have had an impact and are often missed or ignored during product development.


Database Design

Very often an enterprise application has a database behind it, and generally more than one table to handle its operations. The following are key considerations when designing a database.

Audit Columns

It is important, and standard practice, to include audit columns in every table in your database. Audit columns typically comprise created_at, created_by, updated_at, and updated_by, and these fields must be updated at the right points in the application flow.
Audit columns track when a record in the database was created or updated, and by whom. No feature may explicitly require them, but they help with debugging and history.
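As a minimal sketch of the idea, here is an illustrative table with the four audit columns, using Python's sqlite3 module (table and user names are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        amount      REAL NOT NULL,
        created_at  TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
        created_by  TEXT NOT NULL,
        updated_at  TEXT,
        updated_by  TEXT
    )
""")

# On insert, stamp who created the record; created_at defaults automatically.
conn.execute("INSERT INTO orders (amount, created_by) VALUES (?, ?)", (99.5, "alice"))

# On update, stamp who changed it and when.
conn.execute(
    "UPDATE orders SET amount = ?, updated_at = CURRENT_TIMESTAMP, updated_by = ? "
    "WHERE id = ?",
    (120.0, "bob", 1),
)

row = conn.execute("SELECT created_by, updated_by FROM orders WHERE id = 1").fetchone()
print(row)  # ('alice', 'bob')
```

Many ORMs can populate these columns automatically; the point is that every table carries them from day one.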

Schema Change Document

Database structure can change frequently, and maintaining those changes across different deployment environments is a tedious task. It is helpful to track all schema updates and changes as DDL queries, versioned according to your releases.
If you have an ORM in place, it should be capable of handling schema updates; nevertheless, ensure your ORM validates the schema at every build.
With this in place, deploying to any environment becomes a seamless task.
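One common way to implement versioned schema changes is a small migration runner that records which DDL versions have been applied. This is an illustrative sketch (the migration list and table names are assumptions; dedicated tools such as Flyway or Liquibase do this properly):

```python
import sqlite3

# Ordered, versioned DDL migrations: index 0 is version 1, index 1 is version 2, ...
MIGRATIONS = [
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
    "ALTER TABLE users ADD COLUMN email TEXT",
]

def migrate(conn):
    """Apply any migrations newer than the recorded schema version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    for version, ddl in enumerate(MIGRATIONS, start=1):
        if version > current:
            conn.execute(ddl)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))

conn = sqlite3.connect(":memory:")
migrate(conn)  # a fresh database gets both migrations, in order
```

Because the applied version is stored in the database itself, running the same deployment against dev, staging, and production converges them all to the same schema.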

Look-up Value Table

Enterprise software products have a ton of features and are supposed to serve end-to-end business use cases. These use cases involve various configuration parameters that tend to change, and it is not right to keep them in the code. As developers, we tend to keep them in a property file and load them at application startup. This may not be feasible, as you would have to relaunch the app with a new property file every time something changes.
It is standard practice to keep a single normalized table, ideally named look_up_value, that carries the configuration parameters serving multiple features across the application. For better performance, the values in this table can be kept in a cache that is refreshed at intervals appropriate to the use case. Examples: email recipient lists, cron schedule intervals, file location parameters, etc.
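A minimal sketch of the pattern, with an assumed (category, key, value) layout for the look_up_value table and a simple TTL cache in front of it:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE look_up_value (category TEXT, key TEXT, value TEXT)")
conn.executemany("INSERT INTO look_up_value VALUES (?, ?, ?)", [
    ("email", "recipients", "ops@example.com"),       # example rows only
    ("cron",  "cleanup_interval", "0 2 * * *"),
])

_cache = {"data": None, "loaded_at": 0.0}
CACHE_TTL_SECONDS = 300  # refresh interval; tune per use case

def get_config(category, key):
    """Return a configuration value, reloading the whole table when stale."""
    now = time.monotonic()
    if _cache["data"] is None or now - _cache["loaded_at"] > CACHE_TTL_SECONDS:
        rows = conn.execute("SELECT category, key, value FROM look_up_value").fetchall()
        _cache["data"] = {(c, k): v for c, k, v in rows}
        _cache["loaded_at"] = now
    return _cache["data"].get((category, key))

print(get_config("cron", "cleanup_interval"))  # 0 2 * * *
```

Changing a parameter is now a row update that takes effect on the next cache refresh, with no redeploy or restart.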

Server-Side Application

My team and I have built great server-side apps and solved complex problems in the past. Here are a few things we tend to miss that are very important.

Authentication and Authorization

When building an enterprise software product, authentication and authorization are mandatory. Your requirements may not mention both explicitly, yet during development and user testing, eight out of ten times both will be required. It is therefore better to set up authentication and authorization early.
Most enterprises these days stick to Single Sign-On (SSO), e.g. LDAP, CAS, etc. It is important to identify the right authentication model for your project in advance.
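The key design point is keeping authentication (who is calling) separate from authorization (what they may do). A minimal sketch, with a hypothetical in-memory session store standing in for the SSO provider:

```python
# SESSIONS stands in for whatever the SSO/identity provider returns after login.
SESSIONS = {"token-123": {"user": "alice", "roles": {"admin", "viewer"}}}

def authenticate(token):
    """Authentication: resolve a credential to an identity, or fail."""
    session = SESSIONS.get(token)
    if session is None:
        raise PermissionError("not authenticated")
    return session

def authorize(session, required_role):
    """Authorization: check that the identity holds the required role."""
    if required_role not in session["roles"]:
        raise PermissionError("not authorized")

session = authenticate("token-123")
authorize(session, "admin")  # alice holds the admin role, so this passes
```

Because the two checks are separate functions, swapping the authentication backend (LDAP, CAS, OAuth) later does not disturb the authorization rules.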


Logging

Logging is an important feature of any enterprise software product. Your requirements may not explicitly demand logging, but it is key to debugging issues and tracking history.

Application level Logging

Application-level logging is a must for any enterprise software product, and various frameworks and libraries make it easy to log to files and databases. This helps track error conditions and debug issues, and the logs can be monitored with a log monitoring tool for further insight.
Failing to set the right log level can hurt your project in various ways. Simply logging everything at INFO can incur performance and storage costs. It is important to understand the different log levels and use them accordingly.
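A short illustration of log levels using Python's standard logging module (logger name, file name, and messages are examples only): with the threshold set to INFO, the DEBUG line is skipped cheaply while the rest are written out.

```python
import logging

log = logging.getLogger("orders")
log.setLevel(logging.INFO)  # messages below INFO are discarded before formatting
handler = logging.FileHandler("app.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s %(message)s"))
log.addHandler(handler)

log.debug("raw payload: %s", {"id": 42})       # diagnostic detail, off in production
log.info("order %d accepted", 42)              # normal business event
log.warning("retrying payment for order %d", 42)
log.error("payment failed for order %d", 42)   # needs attention
```

In production, the same code typically runs with a rotating file handler and the threshold raised or lowered per environment through configuration, not code changes.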

Process Level Logging

Any process or task in an application that deals with a large set of data needs process-level logging. Processes such as file uploads or data transformations, which may insert a large number of records into the database and update a few more, need process logs.
Process logs are crucial for tracking the status, impact, and other monitoring aspects of a process. They also help with audit trails.
It is a best practice to link the log transaction ID with the corresponding records inserted into the database.
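A sketch of that linkage, with assumed table names: each run of a process writes one process_log row, and every record it inserts carries that log's ID as a foreign key.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE process_log (
    id INTEGER PRIMARY KEY, process TEXT, status TEXT, rows_affected INTEGER)""")
conn.execute("""CREATE TABLE customers (
    id INTEGER PRIMARY KEY, name TEXT,
    process_log_id INTEGER REFERENCES process_log(id))""")

# 1. Open a process-log entry for this run.
cur = conn.execute(
    "INSERT INTO process_log (process, status, rows_affected) "
    "VALUES ('file_upload', 'RUNNING', 0)")
log_id = cur.lastrowid

# 2. Stamp the log id on every record the process inserts.
rows = [("alice",), ("bob",)]
conn.executemany("INSERT INTO customers (name, process_log_id) VALUES (?, ?)",
                 [(name, log_id) for (name,) in rows])

# 3. Close the entry with the final status and impact.
conn.execute("UPDATE process_log SET status = 'SUCCESS', rows_affected = ? WHERE id = ?",
             (len(rows), log_id))
```

Given any customer record, you can now answer which run created it, when, and with what outcome.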

Exceptions in an Enterprise Product

Often ignored or left unhandled, exceptions should be dealt with by every use case or logical unit of code. Exceptions arise not only from the code a developer writes, but also from the use case and the business environment the product is part of.
Take our file upload example: while uploading the contents of a file, which may take some time depending on its size, what happens if the database instance goes down?
Exception handling needs an in-depth understanding of the bigger picture of the product. One must understand the actual use of the product, its users, and the business criticality of every feature and process.
The technical design of the project must include exception handling as one of its core aspects.

Handling exceptions

Exceptions that are sensibly caught need to be handled well. There are various ways to handle them, and the way you choose depends on the use case, the business value of the feature, and the criticality of the exception.

Log and Continue

If the exception is not critical and can safely be ignored, it is good to catch it, log it at the application level as an error, and continue with the rest of the process.

Throw and Propagate

When an exception is critical and you want to stop the executing task, you must throw the right exception to the higher level and let it propagate, so that the right layer can catch and handle it properly.
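A sketch of the pattern with illustrative names: the low-level layer raises a domain-specific exception instead of swallowing the problem, and the layer that owns the use case decides what a failure means.

```python
import logging

class CriticalUploadError(Exception):
    """Raised when an upload cannot safely continue."""

def parse_row(row):
    # Low-level layer: throw a precise domain exception, do not swallow it.
    if "amount" not in row:
        raise CriticalUploadError(f"missing amount in {row!r}")
    return float(row["amount"])

def process_upload(rows):
    try:
        return [parse_row(r) for r in rows]
    except CriticalUploadError:
        # This layer adds context (logging) but still propagates upward,
        # letting the caller decide whether to abort, retry, or alert.
        logging.getLogger("upload").error("aborting upload")
        raise
```

Defining the exception type at the domain level, rather than reusing a generic one, is what lets each layer catch exactly the failures it knows how to handle.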

Handling Database Transactions

When handling exceptions, we tend to miss database transactions and may therefore lose data integrity. For example, suppose you are writing to a table A that has a foreign key reference to another table B, and an exception occurs before the referenced table is updated. In this case, you must revert the transaction on table A, since B was never updated, to preserve data integrity.
Modern frameworks provide easy ways to handle transactions, and it is recommended that you adopt one to avoid any sort of data integrity issue in your product.
Handling exceptions is not as easy as it sounds; it needs good architectural design. Identifying the right way to handle exceptions at the start of the project is therefore important.
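The A/B scenario above can be sketched with sqlite3's connection-as-context-manager, which commits on success and rolls back on exception (the RuntimeError stands in for the database going down mid-process):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE b (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE a (id INTEGER PRIMARY KEY, b_id INTEGER REFERENCES b(id))")
conn.commit()

try:
    with conn:  # one transaction: commit on success, rollback on exception
        conn.execute("INSERT INTO a (id, b_id) VALUES (1, NULL)")  # write to A done
        raise RuntimeError("failure before table B could be updated")
except RuntimeError:
    pass  # the partial write to A was rolled back automatically

print(conn.execute("SELECT COUNT(*) FROM a").fetchone()[0])  # 0
```

The same guarantee is what @Transactional in Spring or a session context in an ORM provides: either both tables reflect the change, or neither does.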

Performance and Security Benchmark

Performance and security constraints play a major role when getting your product deployed to production. I have noticed that developers generally tend to miss performance and security checks in the initial phase, and it is only at the very end that the team identifies these issues.
From experience, I have learned hard lessons about performance and security parameters that my team missed during the nascent phase of a project and then tried to fix once they had a negative impact on the product.
As a part of the design, one must consider, document and set up a benchmark and process for all aspects of application performance and security.
Also, it’s good to have performance checks and security audits done at frequent intervals for an enterprise software product.

Data Processing

Most enterprise software products, if not all, do some data processing as part of their core functionality. Data processing may be something like an ETL, file uploads, etc., and may involve a simple data dump or the validation and transformation of data between source and destination. The following are a few best practices I recommend from my experience with enterprise products and customers.

Using Database Temp Tables Effectively

The source for a data processing task should ideally be a feed. A feed could be a file or a request, and it is a best practice to first load its contents into a temp table in the database.
Database engines are super powerful: instead of holding feed data in memory and performing validations there, it is easier, more performant, and more debug-friendly to populate the data into a temp table and run the validations and transformations in the database.
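A small sketch of the staging pattern, with a made-up CSV feed: rows land in a temp table as raw text, and a SQL query, not an in-memory loop, finds the invalid ones.

```python
import csv
import io
import sqlite3

# A tiny in-memory stand-in for an uploaded CSV feed.
feed = io.StringIO("name,amount\nalice,10\nbob,not-a-number\n")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TEMP TABLE staging (name TEXT, amount TEXT)")
conn.executemany(
    "INSERT INTO staging VALUES (?, ?)",
    [(r["name"], r["amount"]) for r in csv.DictReader(feed)],
)

# Let the database engine find invalid rows instead of validating in memory.
bad = conn.execute(
    "SELECT name FROM staging WHERE amount GLOB '*[^0-9.]*'"
).fetchall()
print(bad)  # [('bob',)]
```

Because the raw feed survives in the staging table, a failed run can be inspected with plain SQL rather than reproduced from logs.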

Processing File Uploads

Processing files uploaded into the product may seem simple, and the real complexity may lie in the crux of the logic; nevertheless, handling the files at a high level should also follow a few best practices.

  • Validation of a file upload should be all-or-nothing: if any portion of the file fails validation, reject the file completely rather than take in partial data.
  • Maintain a log for every upload, including status, the time taken, and other parameters of interest. Failed uploads should be logged too.
  • Always map the log record ID to the records inserted into the database from the file. This makes it easy to trace where data came from later.
  • It is good practice to accept the file, store it in a location, and then process it, rather than processing it as a stream.
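The first bullet, accept-completely-or-reject-completely, can be sketched as a validate-then-insert split (the validation rule and result shape are illustrative):

```python
def process_upload(rows):
    """Validate every row first; insert only if the whole file is clean."""
    bad_rows = [i for i, row in enumerate(rows) if "name" not in row]
    if bad_rows:
        # One bad row rejects the whole file: no partial data is taken in.
        return {"status": "REJECTED", "bad_rows": bad_rows}
    # ... insert all rows inside a single transaction here ...
    return {"status": "ACCEPTED", "inserted": len(rows)}

print(process_upload([{"name": "a"}, {}]))
# {'status': 'REJECTED', 'bad_rows': [1]}
```

Running the full validation pass before the first insert avoids the harder problem of unwinding a half-loaded file.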

Job Scheduling

Scheduling jobs is a traditional task that has been around in the industry for decades. Scheduled jobs play a vital role in an enterprise software product today, and it is important to follow a few best practices around them.
Scheduling is an industry-standard requirement, so there are mature scheduling tools and libraries on the market that enable easy adoption and effective implementation.

Jobs to be scheduled on application restart

Scheduling jobs should not be someone else's task: your application must be capable of scheduling its jobs based on configuration from some source. Design your application so that it schedules its own jobs at application restart.
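A sketch of startup-time registration with Python's stdlib sched module (the job config would normally live in the database, e.g. the look_up_value table; names and intervals are examples):

```python
import sched
import time

# In a real product this list would be loaded from configuration at startup.
JOB_CONFIG = [
    {"name": "cleanup", "interval_s": 3600},
    {"name": "daily_report", "interval_s": 86400},
]

scheduler = sched.scheduler(time.monotonic, time.sleep)

def schedule_all(config):
    """Register every configured job with the scheduler at application start."""
    for job in config:
        scheduler.enter(job["interval_s"], 1, print, (job["name"],))

schedule_all(JOB_CONFIG)
print(len(scheduler.queue))  # 2 jobs registered, none run yet
```

Because registration is driven by data rather than by hand, a restart restores the full schedule with no operator action.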

Re-run unexpectedly terminated jobs

When a job is triggered by the scheduler, there may be multiple reasons why it terminates before it completes. If the termination is expected and due to some business case, it must be handled accordingly.
However, when a job terminates for unexpected reasons, such as a system restart, the application must be capable of handling the case, either by reverting the partially completed work and re-running the whole task, or by providing a way to resume from where it stopped.
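The resume-from-checkpoint option can be sketched as follows: the job persists its progress after each item, so a rerun after an unexpected termination picks up where the last run stopped (the checkpoint format is an assumption for illustration).

```python
import json
import os
import tempfile

def run_job(items, checkpoint_path):
    """Process items, persisting progress so a rerun can resume."""
    done = 0
    if os.path.exists(checkpoint_path):
        done = json.load(open(checkpoint_path))["done"]  # resume point
    for i in range(done, len(items)):
        # ... process items[i] here ...
        json.dump({"done": i + 1}, open(checkpoint_path, "w"))
    return len(items) - done  # how many items this run processed

path = os.path.join(tempfile.mkdtemp(), "job.ckpt")
first = run_job([1, 2, 3], path)          # fresh run: processes all 3 items
json.dump({"done": 2}, open(path, "w"))   # simulate a crash after 2 items
second = run_job([1, 2, 3], path)         # resumed run: only 1 item remains
```

Whether to resume or to revert and rerun depends on whether each item's processing is idempotent; non-idempotent work favors the revert-and-rerun approach from the text.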

Daisy chaining jobs

In an enterprise application there may be multiple scheduled jobs, and at times the execution of one job depends on the execution of another. This is the typical case for a daisy chain. Chaining lets you tie one job to another, so that a job's execution can be based on the success or failure of another.
This is an important feature that developers are often unaware of. Daisy chaining is available in many well-known scheduling tools, which have been built to handle various use cases and edge cases. I recommend using a mature scheduling tool for such requirements.
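To make the idea concrete, here is a deliberately minimal run-on-success chain (mature schedulers like Quartz or Airflow express the same dependency declaratively and handle the edge cases):

```python
def run_chain(jobs):
    """Run each (name, fn) job only if its predecessor succeeded."""
    results = []
    for name, fn in jobs:
        try:
            fn()
            results.append((name, "SUCCESS"))
        except Exception:
            results.append((name, "FAILED"))
            break  # downstream jobs depend on this one, so stop the chain
    return results

chain = [
    ("extract", lambda: None),
    ("transform", lambda: 1 / 0),  # fails, so...
    ("load", lambda: None),        # ...this never runs
]
print(run_chain(chain))
# [('extract', 'SUCCESS'), ('transform', 'FAILED')]
```

A real tool also covers the cases this sketch ignores: retries, run-on-failure branches, and fan-in where one job waits on several predecessors.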
This isn't all of it; I will add more parts to this blog with further essential best practices for building a software product as I evolve as a techie.

I would also love to hear from you about any other best practices I have missed here, or any other feedback on this blog.
