While designing and working on a number of React applications, I have always wanted my team and myself to write unit tests. Over time, I was able to learn, write, and improve on how we write test cases as a team.


There isn’t a standard way or a rule book on how and what to unit test with React and Redux, so this blog talks about a few patterns that I have started using to unit test my applications. The examples in this series will cover the breadth of how to write unit tests.

What we’ll cover by the end of this series

Important libraries with React for Unit Tests


Jest is the official testing library written by Facebook for React apps. Though there are other libraries that help us unit test React, Jest stands out for the following reasons.

There is much more to it, and you may read about it here.


Enzyme is a jQuery-like testing utility for React written by Airbnb. Enzyme helps you traverse, assert on, and manipulate React components. Most importantly, it lets you render components at either a shallow level or a deep level while running tests. Again, you may spend time reading about it here.

More about setting up your project with Jest and Enzyme can be learned from the Setting Up section of this article.

Most of how I have described how to write Unit Tests in this series has been inspired by reading this article.

Configuring code coverage

Jest ships code coverage as part of its huge list of utilities. We only have to configure Jest to generate coverage reports the way we want them. Below is a basic setup for getting code coverage, while documentation on how to set it up in detail can be read here.

"jest": {
  "collectCoverage": true,
  "collectCoverageFrom": ["src/**/*.{js,jsx}"],
  "coverageDirectory": "<rootDir>/coverage/",
  "coveragePathIgnorePatterns": ["/build/", "/node_modules/"],
  "setupFiles": ["<rootDir>/config/polyfills.js"],
  "testPathIgnorePatterns": ["/build/", "/node_modules/"],
  "testEnvironment": "node",
  "testURL": "http://localhost",
  "transform": {
    "^.+\\.(js|jsx)$": "<rootDir>/node_modules/babel-jest",
    "^.+\\.css$": "<rootDir>/config/jest/cssTransform.js",
    "^(?!.*\\.(js|jsx|css|json)$)": "<rootDir>/config/jest/fileTransform.js"
  },
  "transformIgnorePatterns": ["[/\\\\]node_modules[/\\\\].+\\.(js|jsx)$"],
  "moduleNameMapper": {
    "^react-native$": "react-native-web"
  }
}


What to test in a React Component

The first question that came to mind when I decided to write unit tests was what to test in the whole component. Based on my research and trials, here is an idea of what you would want to test in your component. Having said that, this is not a rule; your use case may want to test aspects outside this list as well.



On a general note, if we are unit testing, we should be testing a dumb component (a simple React component) and not a connected component (a component connected to the Redux store).


A component connected to the store can be tested both as a connected component and as a dumb component; to do this, we export the component in its definition as a non-default export. Testing connected components is generally an integration test.

Test what the component renders by itself and not child behavior

It’s important to test all the direct elements that the component renders; at times it might be nothing. Also, ensure you test the elements rendered by the component that are not dependent on the props passed to it. This is why we recommend shallow rendering.

Test the behavior of the component by modifying the props

Every component receives props, and the props are sometimes the deciding attributes of how the component renders or interacts. Your test cases can pass different props and test the behavior of the component.
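As a framework-free sketch of the idea, treat the render output as a function of props and assert on it for different prop values. The `jobBadge` helper below is hypothetical and exists only for illustration; with Enzyme you would shallow-render the component and use `wrapper.setProps` instead.

```javascript
// Hypothetical presentational logic: what is rendered is decided
// entirely by the props passed in.
function jobBadge(props) {
  return props.isOpen ? 'Apply Now' : 'Position Closed';
}

// Each "test" passes different props and checks the rendered output
const openBadge = jobBadge({ isOpen: true });    // 'Apply Now'
const closedBadge = jobBadge({ isOpen: false }); // 'Position Closed'
```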

Test user interactions, thus testing component internal methods

Components are bound to have user interactions, and these interactions are handled either with the help of props or by methods internal to the component. Testing these interactions, and thereby the component's private methods, is essential.

What not to test in a React Component

What to test was simple, and probably straightforward. We also need to be well aware of what should not be tested in a React component as part of unit tests.

Do not test PropTypes and Library functions

It does not make sense to test library functions, or functionality that you trust to be tested as part of the framework itself.

Do not test style attributes

Styles for a component tend to change. Testing the styles of a component does not add value, and it is not maintainable: when styles change, the test cases are bound to change.

Do not test default state or state internal to a component

It’s not important to test the state internal to the component; it is exercised implicitly and gets tested indirectly when we test user interactions and the methods internal to the component.

Identifying the “Component Contract”

When we start writing test cases for a component, understanding the “Component Contract” makes it easier to decide what to test and what not to.

To explain how we identify the contract, let’s discuss with an example.

Consider the following page, which is a component called ReferralJobs.

Component Contract


Here is a code snippet showing how this component is written:

export class ReferralJobs extends Component {
    constructor(props) {
        super(props); // required before accessing this.state
        this.state = { pageNumber: 1, showReferDialog: false };
    }
    componentDidMount() {
        let data = { pageNumber: this.state.pageNumber, showReferDialog: false };
        this.props.getReferralJobs(data);
    }
    searchJob = (data) => {
        let defaultData = { pageNumber: this.state.pageNumber };
        if (data !== defaultData) {
            // ... trigger a search with the new criteria
        }
    }
    handleReferJobDialog = (Job_Posting_ID, JobTitle) => {
        let currentState = this.state.showReferDialog;
        this.setState({ showReferDialog: !currentState });
        this.setState({ jobId: Job_Posting_ID });
        this.setState({ jobTitle: JobTitle });
    }
    referJob = (data) => {
        // ... dispatches the refer-a-friend action
    }
    render() {
        return (
            <div>
                <ReferralSearch onClick={this.props.getReferralJobs} />
                <h2>Suggested jobs to refer</h2>
                <Grid>
                    {this.props.jobs.tPostingJobLists && this.props.jobs.tPostingJobLists.map((job, index) => {
                        return (
                            <Paper key={index}>
                                <h3>{job.JobTitle}</h3>
                                <span>{job.MinExperience} – {job.MaxExperience} Years</span>
                                <a className="details-link" onClick={() => this.handleReferJobDialog(job.Job_Posting_ID, job.JobTitle)}>Refer a Friend</a>
                            </Paper>
                        );
                    })}
                </Grid>
                <PortalappDialog open={this.state.showReferDialog} />
            </div>
        );
    }
}

const mapDispatchToProps = (dispatch, ownProps) => {
    return bindActionCreators({
        getReferralJobs: getReferralJobsAction,
        referAFriend: getReferAFriendAction
    }, dispatch);
};

const mapStateToProps = (state, ownProps) => {
    return {
        jobs: state.referralsState.ReferralJobs,
    };
};

export default connect(mapStateToProps, mapDispatchToProps)(ReferralJobs);

The component is composed of three parts: Search, the Job Posting Container, and the PortalApp Dialog. Let’s identify the contract for this component and also write test cases.

Search is Always Rendered

The Search container is always rendered and is not conditional on any of the props, and it accepts a prop as its click handler.

This can be divided into two pieces

Let’s write a couple of test cases for the same

describe("ReferralJobs", () => {
    let props;
    let mountedReferralJobs;
    const referralJobs = () => {
        if (!mountedReferralJobs) {
            mountedReferralJobs = shallow(<ReferralJobs {...props} />);
        }
        return mountedReferralJobs;
    };
    const referralJobsMounted = () => {
        if (!mountedReferralJobs) {
            mountedReferralJobs = mount(<ReferralJobs {...props} />);
        }
        return mountedReferralJobs;
    };
    beforeEach(() => {
        props = {
            jobs: { tPostingJobLists: [] },
            getReferralJobs: jest.fn()
        };
        mountedReferralJobs = undefined;
    });
    it("Always renders a `ReferralSearch`", () => {
        expect(referralJobs().find(ReferralSearch).length).toBe(1);
    });
    it("sets the rendered `ReferralSearch`'s `onClick` prop to the same value as `getReferralJobs`", () => {
        const search = referralJobs().find(ReferralSearch).first();
        expect(search.props().onClick).toBe(props.getReferralJobs);
    });
});

There are two test cases within a test suite in the above code snippet. Let’s first try to understand how the test suite is configured.

Our describe method initializes a test suite, and it can contain any number of test cases within it. It supports various setup functions such as afterEach, beforeEach, etc.; read more here.

Within our describe method we have two initializations, one called referralJobs and the other referralJobsMounted. These are the two ways in which we can render our component on the virtual DOM during our testing.


Shallow Rendering

Shallow rendering is the most widely used form of rendering when writing unit tests with Enzyme. It renders the component one level deep and does not care about the behavior of child components. It is used when you care only about what is rendered by the component of interest.


Mount Rendering

Mount is a form of rendering that renders the complete component on the virtual DOM and also returns an instance of the component. This helps us in cases where we need to test component-level props.

In our above snippet, we are also passing default props to the component so that the ReferralJobs component renders successfully.

Moving to our test cases, we have two test cases matching our identified contract: one to verify that the search component is successfully rendered, and the other to verify that the prop set on the rendered component is the same prop that we passed in when invoking the component.

As you will notice, we use Jasmine’s expect library to make assertions, and this comes built into Jest.

We pass jobs as props and ‘n’ jobs to be rendered as Paper components

The main functionality of the component is to display jobs that are given to it as props. This can again be separated into two parts.

To test this we have two test cases, and we pass two job objects as props in our test case.

describe("when `jobs` is passed", () => {
    beforeEach(() => {
        props.jobs = {
            "tPostingJobLists": [
                {
                    "JobTitle": "Java develper -Test",
                    "Skill_Required": "java oracle",
                    "State": "Arunachal Pradesh",
                    "ShortDesc": "test descriptuin test descriptuin test descri...."
                },
                {
                    "JobTitle": "Java develper -Test",
                    "Skill_Required": "java oracle",
                    "State": "Arunachal Pradesh",
                    "ShortDesc": "test descriptuin test descriptuin test descri...."
                }
            ]
        };
    });
    /*
     * Tests that the count of job cards (Paper) rendered is equal to
     * the count of jobs in the props
     */
    it("Displays job cards in the `Grid`", () => {
        const wrappingGrid = referralJobs().find(Grid).first();
        expect(wrappingGrid.find(Paper).length).toBe(props.jobs.tPostingJobLists.length);
    });
    /*
     * Tests that the first job paper has rendered the right job title
     */
    it("Displays the job title on the first job card", () => {
        const firstJob = referralJobs().find(Grid).first().find(Paper).first();
        expect(firstJob.find('h3').first().text()).toEqual(props.jobs.tPostingJobLists[0].JobTitle);
    });
});

On clicking Refer a Friend, the component state must change

In each job card we have a Refer a Friend link; on clicking this link, the click handler changes the state and a dialog opens.

If you read the component code for ReferralJobs, the dialog has a prop whose value is tied to the component state. When this value is true, the dialog is expected to open.

We need to test that clicking the Refer a Friend link changes the state, and thereby whether the dialog's prop is set to true.

/*
 * Test that clicking on refer a friend changes the state of the portal app dialog
 * The ReferJob pop up cannot be tested as Enzyme's mount method does not
 * render inner components
 * This illustrates how we can test component level methods
 */
it("opens a `PortalAppDialog` and shows `ReferJob` on clicking on refer a friend link", () => {
    const firstJob = referralJobs().find(Grid).first().find(Paper).first();
    const referAFriend = firstJob.find('.details-link');
    const referDialogBeforeClick = referralJobs().find(PortalappDialog);
    expect(referDialogBeforeClick.props().open).toBe(false);
    const referLink = referAFriend.first();
    referLink.simulate('click');
    const referDialogAfterClick = referralJobs().find(PortalappDialog);
    expect(referDialogAfterClick.props().open).toBe(true);
});

In the above snippet we have introduced the simulate functionality. simulate is an Enzyme function that helps you trigger events on elements.


In our ReferralJobs component code, the click handler for the Refer a Friend link is an internal component method. Thus, by performing this test we are also testing a private internal method of the component.

Components form the core of a React application. I hope this has helped you understand how and what to unit test in a component of a React application.

Testing Redux Asynchronous action creators

Assuming that you know what action creators are, let me move to asynchronous action creators. Your actions may make asynchronous HTTP requests to get data, and testing them is important. Action creators are responsible for dispatching the right actions to the redux store.

In our ReferralJobs application, we have used axios to make HTTP requests. You may alternatively use any library (e.g., fetch). In order to test an action creator, we have to mock the HTTP request and the response. We also need to mock the Redux store to handle the actions.
For this, we have used redux-mock-store to mock the store and axios-mock-adapter to mock our axios HTTP requests. (If you are using anything other than axios to perform HTTP requests, consider using nock.)

const middlewares = [thunk];
const mockStore = configureMockStore(middlewares);
axios.defaults.baseURL = 'https://appst.portalapp.com/';
const mock = new MockAdapter(axios);

In our test case, we are simply using the libraries and setting up the mocks. We will create a mock store for testing the action creators.

const middlewares = [thunk];
const mockStore = configureMockStore(middlewares);
axios.defaults.baseURL = 'https://appst.portalapp.com/';
const mock = new MockAdapter(axios);
/*
 * Test Suite to test Asynchronous actions
 * This test case mocks the API request made by axios with the help of
 * axios-mock-adapter
 */
describe('Testing Async Actions', () => {
    afterEach(() => {
        mock.reset();
    });
    it('creates GET_REFERRAL_JOBS when fetching referral jobs has been done', done => {
        mock.onPost(getReferralJobsURL).reply(200, { data: { todos: ['do something'] } });
        const expectedActions = [
            {"type": "GET_REFERRAL_JOBS", "payload": {"todos": ["do something"]}}
        ];
        const store = mockStore({ todos: [], loginState: { accessToken: "sampleAccessToken" } });
        store.dispatch(getReferralJobsAction({}));
        /*
         * Setting timeout as axios and axios-mock-adapter don't work great
         * We should move from axios to fetch if we don't have a specific reason to use axios
         */
        setTimeout(() => {
            expect(store.getActions()).toEqual(expectedActions);
            done();
        }, 1000);
    });
});

Once our mockStore is ready, our test case mocks the HTTP request that is expected to be made and gives it a sample response.


When an action creator is called, it is expected to dispatch actions to the store and this is a known object defined as expectedActions in our test case.


We then dispatch the action creator into our store with empty data; the action creator is now expected to make an HTTP request and dispatch the necessary actions into the store.


We run our assertion after the timeout as the HTTP request is an asynchronous call. (This is a hack to get it working with axios-mock-adapter; if you are using nock, the timeout is not required.)

Testing Redux Reducers

Testing reducers is straightforward and does not depend on any setup. Any reducer, when called with an action type, must modify the state and return the new state. Hence we only have to assert on a state object.


In our test case, we are passing a payload to the action type and we expect the state to change with the appropriate payload.
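To make this concrete, here is a minimal sketch of the pattern, assuming a hypothetical referralsReducer that handles the GET_REFERRAL_JOBS action (the action name mirrors the one used earlier, but the reducer itself is illustrative):

```javascript
// A hypothetical reducer for illustration; the real reducer in the app
// may differ. It stores the action payload under ReferralJobs.
const initialState = { ReferralJobs: {} };

function referralsReducer(state = initialState, action) {
  switch (action.type) {
    case 'GET_REFERRAL_JOBS':
      return { ...state, ReferralJobs: action.payload };
    default:
      return state;
  }
}

// The "test" simply calls the reducer with an action
// and asserts on the returned state object
const newState = referralsReducer(undefined, {
  type: 'GET_REFERRAL_JOBS',
  payload: { tPostingJobLists: [{ JobTitle: 'Java Developer' }] }
});
```

Because a reducer is a pure function, no store, mock, or rendering setup is needed: call it, compare the output.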


Redux handles the state and data flow within a React application; writing unit tests for its action creators and reducers brings a lot of value to your application’s quality. I hope this article helped you.

We, as product development enthusiasts and owners, build software every day, be it as part of our service offering to clients or as another product of our own. The primary intent is to build enterprise software that solves problems or makes work easier. The need for the product, or the use case it addresses, is fundamental to the list of features the product is built with.

Having built many software products in the past, and having had the opportunity to see them move through the product development lifecycle (development, deployment, support), here are a few key considerations that are very important from a design and development point of view for your product to become enterprise-level software, apart from its key features.

Building Enterprise Software

I have tried to cover points that have had an impact and are often missed out or ignored during product development.


Database Design

Very often we have a database associated with our enterprise application, and generally more than one table to handle our operations. The following are some key considerations when designing a database.

Audit Columns

It’s important, and a standard practice, to include audit columns in every table in your database. Audit columns comprise created_at, created_by, updated_at, and updated_by, and it’s necessary that these fields are updated at the right points in the flow.
Audit columns help track when a record in the database was created or updated, and by whom. This may not be needed for any feature in your application; however, it helps with debugging and history.
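As a sketch of the idea (the helper names below are mine, not from any framework), the application layer can stamp these columns before every insert or update:

```javascript
// Stamp all four audit columns on a new record before INSERT
function withAuditOnCreate(record, userId, now = new Date()) {
  return { ...record, created_at: now, created_by: userId, updated_at: now, updated_by: userId };
}

// Stamp only the update columns before UPDATE
function withAuditOnUpdate(record, userId, now = new Date()) {
  return { ...record, updated_at: now, updated_by: userId };
}

const row = withAuditOnCreate({ title: 'Job Posting' }, 'user-17');
```

Centralizing the stamping in one place (a data-access helper or ORM hook) keeps the columns consistent across every table.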

Schema Change Document

Database structure can change very frequently, and maintaining the changes across different deployment environments is a tedious task. It’s helpful to keep track of all schema updates and changes in the form of DDL queries, versioned according to your releases.
If you have an ORM in place, the ORM should be capable of handling schema updates; however, ensure your ORM validates your schema at every build.
By doing this, deploying to any environment becomes a seamless task.

Look up value table

Enterprise software products have a ton of features and are supposed to serve an end-to-end business use case. Business use cases have various configuration parameters that tend to change, and it is not right to keep them in the code. As developers, we tend to keep them in a property file and load them at application start. This may not be feasible, as you may have to relaunch the app with a new property file every time there is a change.
It has been a standard and best practice to have a single normalized table, ideally a “look_up_value” table, that carries the list of configuration parameters serving multiple features across the application. To be more efficient in terms of performance, the values in this table can be kept in a cache, and the cache can be refreshed at frequent intervals depending on the use case. Examples: email recipient lists, cron scheduling intervals, file location parameters, etc.
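A minimal sketch of the cache pattern, where the hypothetical fetchLookupValues() stands in for a real query against the look_up_value table:

```javascript
let cache = new Map();

// Stand-in for a real database query against the look_up_value table
async function fetchLookupValues() {
  return [
    { key: 'email.recipients', value: 'ops@example.com' },
    { key: 'cron.interval.minutes', value: '15' },
  ];
}

// Rebuild the in-memory cache from the table
async function refreshCache() {
  const rows = await fetchLookupValues();
  cache = new Map(rows.map(r => [r.key, r.value]));
}

// Features read configuration from the cache, not from the database
function getConfig(key) {
  return cache.get(key);
}

// Refresh at startup and then at a fixed interval (interval is illustrative):
// refreshCache(); setInterval(refreshCache, 15 * 60 * 1000);
```

Changing a parameter then becomes a row update plus a cache refresh, with no redeploy or restart.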

Server-Side Application

My team and I have built great server-side apps and have solved complex problems in the past. Here are a few things that we tend to miss, yet are very important.

Authentication and Authorization

When building an enterprise software product, authentication and authorization are mandatory. Your requirements may not explicitly mention both, but during the course of development and user testing, eight out of ten times both will be required. Hence it is better to get the authentication and authorization setup done early.
Most enterprises these days stick to Single Sign-On (SSO), e.g., LDAP, CAS, etc. It’s important to identify the right authentication model for your project in advance.


Logging

Logging is an important feature of any enterprise software product. Your project may not explicitly demand logging; however, logging is key for debugging issues and tracking history.

Application level Logging

Application-level logging is a must for any enterprise software product; there are various frameworks and libraries built to help you easily log to a file or a database. This helps us track error conditions and debug issues. These logs can be monitored with the help of a log monitoring tool that can provide more insights.
While logging, not using the right log level can impact your project in various ways. Simply using INFO for all types of logging may incur performance and space costs. Hence it’s important to understand the different log levels and use them accordingly.
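A toy sketch of level-aware logging (real projects should use an established logging library; the names here are illustrative):

```javascript
const LEVELS = { ERROR: 0, WARN: 1, INFO: 2, DEBUG: 3 };
const currentLevel = LEVELS.INFO; // the configured threshold

const written = [];
function log(level, message) {
  // Messages below the threshold are skipped, saving I/O and space
  if (LEVELS[level] <= currentLevel) {
    written.push(`[${level}] ${message}`);
  }
}

log('ERROR', 'database connection lost'); // kept
log('INFO', 'job started');               // kept
log('DEBUG', 'cache hit for key=42');     // dropped at INFO level
```

The point of the threshold is exactly the paragraph above: a production system set to INFO pays no cost for DEBUG chatter, while a developer can lower the threshold locally.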

Process Level Logging

Any process or task in an application that deals with a large set of data needs process-level logging. A process such as a file upload or a data transformation, which may insert a large set of records into the database and update a few more, needs process logs.
Process logs are crucial for tracking the status, impact, and other monitoring aspects of a process. They also help with audit logs.
It’s a best practice to link a log transaction ID with the corresponding records that were inserted into the database.

Exceptions in an Enterprise Product

Often ignored or left unhandled, every use case or logical code entity is expected to handle exceptions. Exceptions may occur not only due to the code that a developer writes; they are more about the use case and the business environment that a product is part of.
Take our file upload example: during the course of uploading the contents of a file, which may take some time (based on the size of the file), what would happen if the database instance goes down?
Exception handling needs an in-depth understanding of the bigger picture of the product. One needs to understand the actual use of the product, its users, and the business criticality of every feature and process in the product.
The technical design of the project must include exception handling as one of its core aspects.

Handling exceptions

Exceptions that are sensibly caught need to be handled well. There are various ways in which you can handle them, and the way you choose depends on the use case, the business value of the feature, and the criticality of the exception.

Log and Continue

If the exception is not critical and is worth ignoring and continuing with the rest of the process, it’s good to catch and log the exception at the application level as an error and continue.

Throw and Propagate

When an exception is expected to be critical and you want to stop the executing task, it’s a must to throw the right exception to the higher level and let it propagate, so that the right layer can catch and handle it properly.
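The two strategies can be sketched side by side; CriticalError is a hypothetical marker class for exceptions that must stop the task:

```javascript
class CriticalError extends Error {}

const errorLog = [];

function processRecord(record) {
  if (record.corrupt) throw new Error('bad record');     // ignorable
  if (record.dbDown) throw new CriticalError('db down'); // must stop the task
  return record.id;
}

function processBatch(records) {
  const done = [];
  for (const record of records) {
    try {
      done.push(processRecord(record));
    } catch (err) {
      if (err instanceof CriticalError) {
        throw err; // throw and propagate: let a higher layer handle it
      }
      errorLog.push(err.message); // log and continue with the rest
    }
  }
  return done;
}

const ok = processBatch([{ id: 1 }, { id: 2, corrupt: true }, { id: 3 }]);
```

Here a corrupt record is logged and skipped, while a database outage aborts the whole batch and bubbles up to a layer that can react, two different answers chosen by the criticality of the failure.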

Handling Database Transactions

When handling exceptions we tend to miss out on database transactions, and hence may lose data integrity. For example, suppose you are writing to table A, which has a foreign key reference to another table B, and an exception occurs before the referenced table is updated. In this case, one must revert the transaction made on table A, as B was never updated, to preserve data integrity.
Modern technologies provide various easy ways to handle transactions, and it’s recommended that you incorporate one of them to avoid any sort of data integrity issue in your product.
Handling exceptions is not as easy as we discuss; it needs a good architectural design. Hence, identifying the right way to handle exceptions at the start of the project is important.

Performance and Security Benchmark

Performance and security constraints play a major role when getting your product deployed to production. I have noticed that developers generally tend to miss out on performance and security checks in the initial phase, and it’s at the final stages that the team identifies these issues.
From experience, I have had hard lessons on performance and security parameters that my team missed during the nascent phase of a project, and we had to fix them after they had a negative impact on the product.
As part of the design, one must consider, document, and set up a benchmark and process for all aspects of application performance and security.
Also, it’s good to have performance checks and security audits done at frequent intervals for an enterprise software product.

Data Processing

Most enterprise software products, if not all, have some portion of data processing as part of the product’s core functionality. Data processing may be anything similar to an ETL, file uploads, etc., and may include a simple data dump or validation and transformation of data between source and destination. The following are a few best practices that I recommend out of my experience dealing with enterprise products and customers.

Using Database Temp Tables Effectively

The source for a data processing task should ideally be a feed. A feed could be a file or a request, and it’s a best practice to initially load the contents of the feed into a temp table in the database.
Database engines are super powerful. Instead of holding the feed data in memory and performing validations there, it’s easier, more performant, and more debug-friendly to populate the data into a temp table in the database and use it for validations and transformations.

Processing File Uploads

Processing files that are uploaded into the product may seem simple. The complexity may lie in the crux of the logic; however, at a high level, handling the files should also follow a few best practices.

Job Scheduling

Job scheduling is a traditional task that has been in the industry for decades. Scheduled jobs play a vital role in an enterprise software product, and it’s important that we follow a few best practices with respect to them.
Scheduling is an industry-standard requirement, hence there are mature scheduling tools and libraries available in the market that enable easy adoption and effective implementation of scheduling.

Jobs to be scheduled on application restart

Scheduling jobs shouldn’t be someone else’s manual task; your application must be capable of scheduling the jobs based on configuration from some source. Hence we must design our application to schedule its jobs on its own at application restart.

Re-run unexpectedly terminated jobs

When a job is triggered by the scheduler, there may be multiple reasons why it could terminate before it completes. If the termination is expected and is due to some business case, then this must be handled well.
However, when a job is terminated due to unexpected reasons such as a system restart, the application must be capable of handling the case either by reverting the partially completed tasks and re-running the whole job, or by providing a way to resume from where it terminated.
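One way to sketch the resume-from-checkpoint option (the checkpoint object below is an in-memory stand-in for a persistent job-status record, such as a row in a hypothetical job_status table):

```javascript
// In a real product this progress marker would be persisted
const checkpoint = { lastProcessedIndex: -1 };

function runJob(items, processed) {
  // Resume from just after the last successfully processed item
  for (let i = checkpoint.lastProcessedIndex + 1; i < items.length; i++) {
    processed.push(items[i]);
    checkpoint.lastProcessedIndex = i; // persist progress after each item
  }
}

const processed = [];
runJob(['a', 'b', 'c'], processed); // first run processes everything
runJob(['a', 'b', 'c'], processed); // a re-run after a crash adds nothing
```

Because progress is recorded after each item, a crash mid-run leaves a checkpoint from which the next run continues without duplicating work.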

Daisy chaining jobs

In an enterprise application there may be multiple scheduled jobs, and at times the execution of one job depends on the execution of another. This is a typical case for a daisy chain. Chaining jobs lets you tie one job to another: the execution of a job can be based on the success or failure of another job.
This is an important feature that developers are generally unaware of. Daisy chaining is a feature in many known scheduling tools, and they have been built to handle various use cases and edge cases. I would recommend using a well-known, mature scheduling tool for such use cases.
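In promise terms, a daisy chain can be sketched like this (the job functions are placeholders; mature schedulers provide this wiring out of the box):

```javascript
const ran = [];

async function jobA() { ran.push('A'); }
async function jobB() { ran.push('B'); }
async function cleanupJob() { ran.push('cleanup'); }

async function runChain() {
  try {
    await jobA();
    await jobB();        // runs only when jobA completed successfully
  } catch (err) {
    await cleanupJob();  // runs only when a job in the chain failed
  }
}
```

Success of jobA gates jobB, and any failure diverts the chain to a cleanup job, which is the success/failure dependency described above.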
This isn’t all of it; I will add a few more parts to this blog going forward, with more essential best practices for building a software product as I evolve as a techie.

I would also love to hear from you about any other best practices that I have missed here, or any other feedback related to this blog.

“Hackathon!” – A garage of techies, fueled with ideas and technologies, hitting their peaks of innovation in a span of 24 hours. Preparations for the event take place for over a month, with participants forming teams, working on their initial plans, and devising gadgets and tools with a mindset to learn, build, and have fun.


HashedIn has been instrumental in nurturing innovation and helping its techies cultivate a sense of learning new technologies for the last 2 years. Hashathon (Hackathon @ HashedIn) has lit its torch 4 times earlier, paving the way for more than 50 cool products across different technology verticals that propose innovative and whacky technical ideas with tremendous market value.


The event primarily focuses on ideation, innovation, and implementation. And the by-product of all this is crazy learning, fun, competition, and FOOD!


Hashathon 5.0

The Hashathon committee not only organizes the event for you but also helps you with a team and throws a bucket full of ideas on the board, so that you get a sense of the current swing and can either choose an idea or come up with one of your own. The environment around you is insane, with scribbled whiteboards, crushed Red Bull cans, zombie faces that have stayed awake round the clock, and some polite music. Participants have always bagged super fun goodies, and the winners have had a jackpot.


Mentoring has been key to the success of Hashathon. The event is not just that one day; it spans the week prior to it. The teams are asked to pitch ideas and come up with a high-level architectural design of how the product is going to be built. With great architects and tech gurus around, teams get mentored before the event on what to focus on and what to keep for last. During the sprint of 24 hours, the mentors visit teams at checkpoints and guide them on technology, implementation, and presentations. Trust me, this is something beyond the competition that every participant gets – “An opportunity to be mentored by the best in class”.


There is no end to motivation during the event. Judges for the event are generally CXOs of various organizations and startups across the industry. The insight they provide on each and every product is incomparable. There is a lot to take from every talk and discussion during the final presentation. And it’s amazingly educative to see various ideas and products built on different technologies put up for evaluation and debate.


Though technology has been the heart of the event, there has been acknowledgement towards design and art in the past. Hashathon has seen miniatures, quilling, painting, and other sorts of artistic projects and is always open to support such activities for the non-techies.


A commendable list of web apps, tools, libraries, and gadgets have been built with technologies like Machine Learning, IoT, Artificial Intelligence, Beacon technology and have been deployed to the cloud.
Hashathon has always motivated Hashers and has kept them excited from the day the event is announced until judgement day. All the techies are now geared up with their respective teams, looking forward to the next one.

It’s imperative for any tech organization, be it a startup or an MNC, to keep sailing on the tides of evolving technology and nurture the instinct of innovation.


At HashedIn, we organize regular Hashathons (Hackathon at HashedIn) to keep our techies boosted with innovation and learning new technology. While a hackathon may sound exciting by the term, it takes ample experience to conduct a successful one.


Here are the 6 trump cards from the experience of organizing multiple successful Hashathons.

Trumpcard #1: Set the tone early and mark the calendar

Getting the mindset of a techie to prepare for a Hackathon is crucial, making it vital to inform and spread the word of the event as early as possible.


To guarantee a good level of participation, be sure to schedule the event considering the following:


Look into your calendar to identify a day with the fewest vacation plans – in many cases, vacations are planned a lot earlier (thanks to booming air ticket fares), and not many would want to replan.

Identify from the delivery team any crucial delivery milestones/releases. If the event overlaps with them, it would become a nightmare for both organizing and participation.



Trumpcard #2: Diversify your Theme

Yes, eyeing a theme for the hackathon is challenging, but super important! The theme should also be diverse enough to help participants get into a pool of ideas.


All projects/ innovations at the event may not always be converging to the specified theme. However, a theme could help streamline everyone’s thought process.


And importantly, before you could decide on a theme, ensure you discuss with groups of people in the organization to understand their perspective of expectations from the event.

Trumpcard #3: Engage people prior to the event

Though it’s an internal event and everyone is aware of it, not everyone has yet come up to the Hackathon swing. Advertising the event internally is key to success.


Few significant steps to let your participants get into the mood:
Emails with catchy subject lines.
Stick posters/flyers around; identify the places where you would get maximum traction.
Catch hold of groups during lunch and breaks.
Visit individual team areas and discuss.
Prepare your organizing team with a few innovative ideas, and help participants form a team to work on an idea.

Trumpcard #4: Merchandise Goodies

After weeks of preparation and a whole night of brainstorming, participants would cherish to take back home something along with memories.


It’s absolutely pleasant to give away cool merchandise as goodies to all participants. This would ensure bonding between the employee and the organization gets stronger.



Trump card #5: Food and Energy Drinks

Food is a good friend to all, feed your participants at regular intervals throughout the event. Good food and refrigerated energy drinks would get your participants energized and active throughout the event.
Also, if your budget allows you to, then plan for a good dinner to wind up the event.


I would say this is vital as it creates a post-event environment for participants to gather and discuss the completed event. It’s gonna be fun, trust me.



Trumpcard #6: Just before Show Time!

All teams have put in effort, and not everyone will have got to where they had planned at the start. It is worthwhile to spend a couple of minutes with each team just before the end of the Hackathon to identify the tenacity in their projects and help each of them prepare for show time.


Feedback is vital. Since this is an internal Hackathon, it would be appreciated if each of your teams is given a turn to present to the judges and receive feedback.



Did I miss mentioning prizes? Hackathons are known for them. Yes, it’s integral to extol the winners with jaw-dropping stuff. Never miss out on this.