Manish Dave
#Technology | 6 Min Read
The COVID-19 pandemic has significantly accelerated the pace at which companies are migrating their application workloads to the cloud. To keep up with the increasingly digitally-enabled world, companies of all sizes are evolving the way they work, to drive agile product delivery and innovation.
Cloud-native technologies are positioned to be the key enabler that helps companies meet the challenges of a digital-focused marketplace. They are a natural fit for IT organizations looking to deploy resilient, flexible applications that can be managed seamlessly from anywhere.
Growth of Cloud-Native
Companies have gradually realized the need to move important components of their business infrastructure to the cloud. The business challenges created by the restrictions put in place during the COVID-19 pandemic have considerably reinforced this need. While many companies have opted to migrate their legacy systems to easy-to-manage and cost-effective cloud platforms, many are choosing to develop cloud-native apps. Unlike typical legacy applications that have to be migrated to the cloud, cloud-native ones are built for the cloud from day one. These apps are typically deployed as microservices, run in containers, and are managed using agile and DevOps practices.
There are several advantages a business can enjoy by going cloud-native, such as:

  • Faster deployment: To keep pace with the competition and meet the evolving needs of customers, companies today have to innovate their apps frequently. In many scenarios, however, deploying new app features can be cumbersome and complex, even for large companies with skilled IT teams. Cloud-native computing makes this process much simpler. Because it is based on microservices, new apps and features can be deployed at a rapid pace, often within a day, with far fewer cross-team dependencies.
  • Wider reach: Cloud-native technologies help companies reach a global audience, expand their market reach, and increase their prospects. The low-latency options that come with going cloud-native let companies seamlessly integrate their distributed applications. For modern video streaming and live streaming requirements, low latency is extremely important; this is where edge computing and Content Delivery Networks (CDNs) additionally shine, bringing data storage and computation closer to users.
  • Leverage bleeding-edge technologies: Bleeding-edge services and APIs that were once available only to businesses with extensive resources are now available to almost every cloud subscriber at nominal rates. These new cloud-native technologies are built first for cloud-based systems and their users.
  • Quick MVP: Thanks to the elasticity of the cloud, anyone can build an MVP/PoC and test their product across geographies.
  • Leaner teams: Owing to reduced operational overheads, teams working with cloud-native technologies can stay leaner.
  • Easy scalability: Most legacy systems depended on plugs, switches, and other tangible hardware components. Cloud-native computing, by contrast, is based wholly on virtualized software, where everything happens in the cloud. Application performance is therefore not affected when capacity is scaled up or down. By going cloud-native, companies need not invest in expensive processors or storage to meet their scalability challenges; they can simply reallocate resources and scale up or down, all without impacting the app's end users.
  • Better agility: The reusability and modularity of cloud-native computing make it a natural fit for firms practicing agile and DevOps methodologies that want frequent releases of new features and apps. Such companies also follow continuous integration and delivery (CI/CD) processes to launch new features. Because cloud-native computing uses microservices for deployment, developers can swiftly write and deploy code, which streamlines the release process and makes the whole system more efficient.
  • Low-code development: With low-code development, developers can shift their focus from repetitive low-value tasks to high-value work that is better aligned with business requirements. It lets developers build application frontends quickly and streamline workflow designs, accelerating time to market.
  • Saves cost: With pay-as-you-go pricing and no upfront hardware investment, companies pay only for the resources they actually consume, which lowers overall running costs.
Technologies that are a perfect fit for cloud-native:

  • K8s: The declarative, API-driven infrastructure of Kubernetes empowers teams to operate independently while focusing on business objectives. It helps development teams raise their productivity while reducing the complexity and time involved in app deployment, and it plays a key role in enabling companies to get the best out of cloud-native computing (a minimal example of driving the Kubernetes API from Python follows this list).
  • Managed Kafka: With several Fortune 100 companies depending on Apache Kafka, the service has become entrenched and popular. As cloud technology keeps expanding, a few changes are needed to make Apache Kafka truly cloud-native. Cloud-native infrastructure lets teams enjoy SaaS/serverless-like features on top of their own self-managed infrastructure.
  • Elasticsearch: Elasticsearch is a distributed search and analytics system that supports complex search capabilities across numerous types of data. For larger cloud-native applications with complex search requirements, Elasticsearch is available as a managed service in Azure.
  • ML/AI: The cloud enables firms to use managed ML and remove much of the operational burden on their teams. It reduces limitations around data and ML, allowing all stakeholders to access the models and insights. AI takes the same approach on the cloud as machine learning but has a wider focus. Many firms today deploy AI and deep learning models to the elastic, scalable environment of the cloud.
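As a small illustration of that declarative, API-driven model, here is a minimal sketch using the official Kubernetes Python client to list the Deployments in a namespace and scale one of them. The deployment name and namespace are placeholders, and a working kubeconfig with cluster access is assumed.

```python
from kubernetes import client, config

config.load_kube_config()   # assumes a local kubeconfig with access to the cluster
apps = client.AppsV1Api()

# List Deployments in a namespace along with their desired replica counts.
for dep in apps.list_namespaced_deployment(namespace="default").items:
    print(dep.metadata.name, dep.spec.replicas)

# Declaratively scale a Deployment by patching its desired state;
# the control plane reconciles the cluster to match.
apps.patch_namespaced_deployment_scale(
    name="orders-service",          # hypothetical deployment name
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```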

All companies, no matter their core business, have to embrace digital innovation to stay competitive and ensure continued growth. Cloud-native technologies give companies an edge over the competition and let them manage applications at scale and with high velocity. Firms previously constrained to quarterly deployments of important apps are now able to deploy safely several times a day.

References
https://www.alibabacloud.com/blog/low-latency-global-cloud-solutions-on-alibaba-cloud_596309
https://platform9.com/blog/how-content-delivery-networks-cdns-can-use-kubernetes-at-the-edge-for-less-latency-and-better-livestream/
https://www.linkedin.com/pulse/four-bleeding-edge-cloud-technologies-right-your-sebastian-dolber
https://thenewstack.io/how-to-make-kafka-cloud-native/
https://docs.microsoft.com/en-us/dotnet/architecture/cloud-native/elastic-search-in-azure
https://medium.com/@ODSC/the-benefits-of-cloud-native-ml-and-ai-b88f6d71783
Pooja Joshi
#Business | 5 Min Read
Cloud-native architecture paves the way for the future of app development. This technology is characterized by its agile approach and helps companies to make the most out of cloud services that have emerged as a key driving factor behind digital transformation in modern industries. Apps developed with a cloud-native approach are playing a major role in transforming how diverse businesses operate, and deliver value to their customers.
Cloud computing has witnessed exceptional growth and development over the last decade, owing to many advantages offered by this system. It helps companies cut down on their expenses, provides superior flexibility, and helps boost their security and stability. More than 90% of enterprises already use a cloud service today, and the rate of cloud adoption is expected to consistently increase in the future.
However, several companies that have already adopted the cloud are now finding it difficult to identify new ways to deliver more innovation and value from their existing cloud strategies. A major reason is that these firms have largely focused on application migration, the process of moving old apps and their functionality onto the cloud. While this worked well in the initial stages, to meet market requirements businesses now need to unlock the full benefits of the cloud and enjoy superior agility and lower IT expenses. The best way to do so is to adopt a “cloud-native” approach, which involves building applications, and organizing people and workflows, specifically for the cloud.
What is meant by going cloud-native?
Cloud-native refers to an approach of building services and applications specifically for the cloud environment. Unlike their legacy counterparts, these apps are designed for the cloud from day one. They can also be fixed and deployed much faster, and tend to have a fluid architecture, so they can be placed in and moved across diverse environments with ease.
Key considerations for companies going cloud-native
  • Cloud-Native Architecture – Monolith vs Microservices: The issue with monolithic architecture is that it takes considerable effort to deploy any newly developed feature to production. Several teams have to coordinate code changes, and a lot of upfront integration and functional testing is needed to deploy several features at once. Microservices, on the other hand, empower developers to deliver new features to their customers much faster.
  • Management and monitoring: With microservices, monitoring solutions have to manage more servers and services than ever before. In addition to having more objects to manage, cloud-native apps also generate far more telemetry that teams have to keep track of, and collecting data from an environment with so many moving parts can prove quite complicated.
  • Service integration: Typically composed of a set of disparate services, cloud-native applications have a distributed nature that makes them highly flexible and scalable in comparison to monoliths. However, this also means that cloud-native workloads include many more moving pieces that have to be connected together seamlessly to achieve success. Effective service integration largely depends on selecting the right deployment techniques.
  • Cultural changes: Implementing cloud-native technology in an organization depends on its existing culture. It is vital for internal teams to adopt cross-functional methods so that software is iterated on at a continuous cadence that complements the firm's business goals. In many instances, making the actual switch to a cloud-native environment is the most straightforward part of the journey, while propagating those changes throughout the organization is the most challenging aspect.
  • Keep up with the latest cloud-native developments: It is important to embrace and implement the latest cloud-native developments to succeed in the digital world, optimizing processes to create new value for customers. The latest developments also give businesses more flexibility and access to cutting-edge technology.
  • Building cloud-native delivery pipelines: Cloud-native apps can run in private, public, on-premises, or hybrid environments. Even today, many application delivery pipelines run in traditional on-premises environments that are clunky to integrate with apps running on containers and public clouds. The best way to overcome these challenges is to move the CI/CD pipeline into a cloud environment, so as to mimic production conditions and bring pipelines closer to the apps. The deployment process becomes faster when code is built and tested closer to where it is deployed.
While going cloud-native is certainly advantageous, it is not that easy. However, by maintaining proper practices and implementing systematic strategies, one can unlock incredible scalability, reliability, and agility through this system. Ensuring proper management and monitoring of the environment, keeping up with the latest cloud-native developments, and building cloud-native delivery pipeline apps can be quite instrumental in this process.
Organizations can effectively improve their business processes with reduced costs, overheads, and manual efforts owing to the many advantageous features of cloud-native applications. These are portable, resilient, scalable and can be updated with ease, and hence can significantly aid companies to provide better experiences to their customers.
References
https://www.capgemini.com/2020/01/cloud-native-continues-to-grow-in-popularity-despite-misconceptions/
https://www.flexera.com/blog/industry-trends/trend-of-cloud-computing-2020/
https://grapeup.com/blog/cloud-native-applications-5-things-you-should-know-before-you-start/
https://dzone.com/articles/going-cloud-native-6-essential-things-you-need-to
Judelyn Gomes
#Business | 6 Min Read
Data warehouse (DW) modernization has become extremely vital for modern businesses. It ensures timely access to the analytics and data needed for businesses to operate smoothly. To facilitate smart decision making for practitioners, especially in the manufacturing industry, data warehousing for OLAP (Online Analytical Processing) applications is used to provide a distinctive edge.
Traditional DW systems are often unable to cope with the requirements of contemporary industries, resulting in several pain points. DW modernization is therefore needed to meet ever-evolving business environments effectively and swiftly, rapidly iterate cutting-edge solutions, and provide adequate support for new data sources. The manufacturing industry in particular involves a host of complex processes and large investments, and to ensure the best possible outcomes it is necessary to modernize the data warehousing system.
A modern DW built on cloud data architecture helps companies support their current and future data analytics workloads, irrespective of scale. The high flexibility offered by popular cloud platforms like AWS empowers businesses to process their data in real time, and comes as a boon for many modern sectors, including manufacturing.
Data warehousing and the manufacturing industry
Over the years, several companies belonging to the manufacturing industry have started to use DW to improve their operations and deliver enterprise-level quality. Data and analytics allow manufacturing firms to stay competitive and cater to the current market needs. Reports, dashboards, and analytics tools are used by manufacturing professionals to extract insights from data, monitor operations and performance, and facilitate business decisions. These reports and tools are powered by DWs that efficiently store data to reduce the input and output (I/O) of data and deliver query results in a swift manner.
While transactional systems are used to gather and store detailed data in manufacturing firms, DWs cater to their analytics and decision-making requirements. Modern manufacturing companies ideally have systems in place to control individual machines and automate production. Online Transaction Processing (OLTP) is optimized to facilitate swift data collection with the feedback required for direct machine control.
For real-time feedback to be meaningful, it is imperative to have historical context. An advanced data warehouse would be required to provide this context. Real-time systems can use this historical data from the DW, along with current process measurements, to offer feedback that facilitates real-time decision support.
Pain points of legacy data
Data warehouse modernization is required to address a host of organizational pain points. Business agility tends to be among the prime goals of modern organizations as they digitize their operations, and it is extremely hard to achieve with legacy tools. The key aim of modernizing a firm's data warehouse environment is to improve its analytics functions, productivity, speed, scale, and economics.
Below are some of the challenges with legacy data warehouses:

  • Advanced analytics: Many organizations today are investing in online analytical processing or OLAP. However, with the legacy data warehousing environments, they are often unable to effectively use the advanced analytics forms, find new customer segments, efficiently leverage big data, and stay competitive.
  • Management: Legacy infrastructure is quite complex, and hence companies often have to keep investing in hiring professionals in order to effectively manage these outdated systems, even if they are not advancing agility or data strategy. This may incur high unnecessary expenses for a business, which can be cut down using a modernized system.
  • Support: There are many newer data sources that are not supported by the typical legacy data warehouses. These traditional solutions were not designed to handle the varying types of structured and unstructured data prevalent today, and hence can pose problems for many businesses. These issues can be solved by moving to more modernized technologies.
How can data warehouse modernization and AWS help?
DW modernization enables companies to be virtually limitless when it comes to storing and managing data. They can scale to gigabytes, terabytes, petabytes, or even exabytes of data, and easily scale their SQL analytics solutions along the way.
Advanced data warehouses enable companies to follow best practices when building competitive business intelligence projects. DW modernization is especially useful in manufacturing and hi-tech industries, where a broad diversity of data types and processing is needed for the full range of reporting and analytics.
AWS and its APN Data and Analytics Competency Partners are known for offering a robust range of services covering the whole data warehousing workflow, including data warehousing analytics, data lake storage, and visualization of results. AWS's data warehouse modernization platform facilitates faster insights at lower cost, ease of use through automated administration, the flexibility to analyze data in both open and in-place formats, and compliance and security for mission-critical workloads.
AWS DW would help in:

  • Swiftly analyzing petabytes of data and providing superior cost efficiency, scalability, and performance.
  • Storing all data in an open format without having to move or transform it.
  • Running mission-critical workloads even in highly regulated industries like manufacturing.
Data comes from a host of sources in contemporary data infrastructures: sensor and machine-generated data, CRM and ERP systems, web logs, and numerous other industry-specific data sources. Most companies, including those in manufacturing, struggle just to store and manage these increasing volumes and formats of data, let alone carry out analytics to identify patterns and trends in it. To solve this issue, data warehouse modernization has become a necessity. It allows businesses to swiftly meet rapidly changing business requirements, provide the needed support for new data sources, and promptly iterate new solutions. There are many platforms and tools available today that can help businesses modernize their data warehouse, AWS solutions being among the most competent ones.
References
https://support.sas.com/resources/papers/proceedings/proceedings/sugi24/Emerging/p142-24.pdf
https://info.fnts.com/blog/5-common-business-challenges-of-legacy-technology
https://cloud.google.com/blog/products/data-analytics/5-reasons-your-legacy-data-warehouse-wont-cut-it
https://tdwi.org/articles/2014/05/20/data-warehouse-modernization.aspx
https://www.vertica.com/solution/data-warehouse-modernization/
https://aws.amazon.com/partners/featured/data-and-analytics/data-warehouse-modernization/#:~:text=AWS%20provides%20a%20platform%20for,compliance%20for%20mission%20critical%20workloads
Pooja Joshi
#Business | 6 Min Read
According to research by the United Nations, the global population is expected to reach 9.7 billion by 2050, approximately a 2 billion increase from today. This implies that the demand for food will increase, and thus, crop production will also have to grow. This isn't as simple as it sounds, because several hurdles hamper agricultural supply, from food security to climate change. To overcome these challenges, we are seeing a shift towards technology and smart farming in the agricultural sector. Farming with diverse information and communication technologies is a relatively new concept, and the sector has already shown immense benefits. Lately, leading agricultural ventures have been mobilizing the potential of cloud technologies to solve an array of farming problems. For instance, John Deere, a renowned farm equipment manufacturer, came up with cloud-based software, Operations Center, which keeps track of a farm vehicle's performance for quick and effective troubleshooting. In India, as reported by NASSCOM in 2019, there are more than 450 agri-tech start-ups, growing at a rate of 25% annually. The sector's potential is evident from the fact that it has received more than $248 million in funding.
Countering the pressures faced by the agricultural sector has become comparatively more straightforward with modern technologies like artificial intelligence, remote sensing, data analytics, GIS, blockchain, and various Internet of Things (IoT) devices. These enable more effective, productive, and prosperous farming practices. Among them, data-enabled business models have been the most profitable, as the collection of real-time data helps predict the prospects of farming practices. Reports suggest that the adoption of data and analytics in agriculture has been growing consistently; the market size of global agriculture analytics is expected to increase from USD 585 million in 2018 to USD 1,236 million by 2023, at a Compound Annual Growth Rate (CAGR) of 16.2% over the forecast period. Large amounts of data are collected and integrated by experts to raise alerts and solve complicated problems related to soil quality and other operational inefficiencies.
How is data derived in smart agriculture?
IoT
The Internet of Things has positively impacted many industries, and agriculture is undoubtedly one of them. IoT helps fight several farming challenges, be it weather conditions, climate change, or the quality of seeds and pesticides. Sensors were introduced a few decades ago, but they were handled conventionally; with the introduction of IoT in agriculture, technologically advanced sensors now deliver live data. For example, remote sensing assists in tracking the condition of crops in a field regularly to recognize any possible risk and take the necessary precautions. Cloud-based data storage with IoT platforms plays a vital role in smart agriculture: whether farmers want to know about crops' live status or the weather, real-time data analysis is quite impactful. A few recent use cases show how IoT and big data in farming have been successful, like Vodafone's Precision Farming solution, which lets farmers use only the amount of fertilizer needed, or the Digital Transmission Network (DTN), through which farmers can examine updated weather data to manage their farmlands better.
Space monitoring
Turning to technology has been one of the most sought-after ways to maintain smooth global food supply, and space monitoring is an integral part of smart farming. Be it frequent droughts or locust outbreaks, space monitoring through geospatial tools is a boon to agriculture. The remote sensing satellite imagery offers critical data for monitoring crops, soil development, and other climatic conditions. For example, climate, soil, and other assessments from satellites assist farmers in planning the required time and amount of irrigation needed for their crops. In this way, the adverse effects of food shortages and famines are tackled.
Use of big data in farming
While smart farming and big data are opening new gates for profitable agricultural yield, the collection of real-time data to forecast various situations is something that has improved farming practices.
Here’s how big data is being used in recent times.

  • Prognosis of yield
    Prediction of yield is one of the most meaningful uses of data in farming, as it helps farmers evaluate specific questions like what to plant, where, and when to plant crops. This is majorly done with specific mathematical and machine learning models and sensors that examine data around yield, weather status, biomass index, and much more. For farmers, the decision-making process becomes smooth and convenient, with an improved approach towards crop production.
  • Effective management of risk
    The chances of crop failure can quickly be evaluated with big data, and hence, farmers find it quite useful. Daily satellite images are combined with relevant data to identify weather scenarios like wind speed, humidity level, rainfall, and much more. Back in 2014, a crucial suggestion from data scientists to Colombian rice farmers reportedly saved millions of dollars in damages from changing weather patterns; such losses can often be avoided with data science.
  • Improvement of farm equipment
    Equipment manufacturing companies have integrated sensors into farming vehicles and farm equipment. This deployment of big data applications helps farmers monitor the long-term health of farm equipment and also lets users know about the availability of tractors, due dates for servicing, and so on. Recently, scientists and researchers from the University of Connecticut have introduced in-field soil moisture sensors that reduce excessive water consumption during farming by at least 40%.
  • Bridging the gap between supply and demand
    Increasing supply chain transparency is one of the crucial benefits that big data offers. One of the main struggles in the food market is to bridge the inevitable gap between supply and demand. According to a report by McKinsey, at least one-third of food produced each year for consumption is wasted. With real-time data analysis, forecasts have become more accurate and integrated planning is now possible, which also helps track and optimize delivery truck routes.
The cloud-based ecosystem built on IoT and big data is gearing up to revolutionize agriculture, as space monitoring and cloud-based apps help farmers adjust production in tune with market demand. Today, the scope of big data in agri-tech is exceptionally significant. Lloyd Marino of Avetta Global, an eminent big-data expert, has pointed out, “Big data in conjunction with the Internet of Things can revolutionize farming, reduce scarcity and increase our nation’s food supply dramatically; we just have to institute policies that support farming modernization.” Working together to enhance the use of smart farming and the application of data in agriculture will strengthen agri-tech solutions for a better future. This will not only increase the production efficiency of crops but also mitigate the problems of rising food demand and supply shortages.
References
https://www.un.org/development/desa/en/news/population/world-population-prospects-2019.html
https://nasscom.in/sites/default/files/media_pdf/NASSCOM_Press_Release_Agritech_Report_2019.pdf
https://www.talend.com/resources/big-data-agriculture/
https://www.researchgate.net/publication/279497092_303_Performance_of_a_New_Low-cost_Soil_Moisture_Temperature_and_Electrical_Conductivity_Sensor
https://www.mckinsey.com/business-functions/sustainability/our-insights/feeding-the-world-sustainably
https://www.forbes.com/sites/timsparapani/2017/03/23/how-big-data-and-tech-will-improve-agriculture-from-farm-to-table/?sh=a325bec59891
Pavan Thallapally
#Business | 7 Min Read
Humanity has evolved through cognitive, scientific, and industrial revolutions, and experts around the globe have designated the current era the Information Age. Many of the principles by which we live are now mediated by the digital world. Across these physical and virtual environments, people expect technology to improve their experience, raise their productivity, and connect them to the world, helping them make better decisions and take better actions.
Digital products now underpin food, communication, travel, health, and many other parts of society. Most people adapt to these products readily; however, problems remain for people who are impaired permanently, temporarily, or situationally. Solving these accessibility problems removes social barriers, enables more natural ways of interacting, reduces errors, allows seamless transitions, and expands market reach.
Designers build a product by empathizing with the target personas, represented by their mental models, and then making deliberate choices about emotion, navigation, and solutions. But for a product to serve everyone, it needs to accommodate all the ways people perceive and interact with it. Below are quick checklists to consider and common mistakes made while designing.
Quick checklists to consider
Textual Representation
  1. Avoid complex vocabulary and describe it with fewer words.
  2. Guide the content with visual hints like icons and images.
  3. Design with simple hierarchy and navigations.
  4. Information should render across all devices and resolutions without text overlapping.
  5. Make text and elements larger by default so they render well across all types of devices.
Hierarchy
  1. Headings drive users' navigation from one section to another and also help assistive technologies.
  2. Use ordered lists or groups to describe collective information.
  3. Surface warnings, errors, and success statuses against the corresponding input labels.
  4. Provide subtitles and voice-over, and accept voice commands, when users engage with audio, video, and other media content.
Color Contrast
  1. Keep interactive elements at adequate contrast ratios.
  2. Always combine colors with visual cues for better interactions.
  3. Describe all textures, patterns, and paradigms with actions and content.
  4. Ensure every user receives the same information from every element, regardless of how they perceive color.
  5. Use plugins for your design tools, such as Stark, to simulate color-vision principles.
Input Devices
  1. All elements and content must be operable by keyboard, pointing devices, and other input methods.
  2. Make sure element focus is visible on the screen for every navigation step.
  3. Add a visual indication of keyboard focus to the layout.
  4. All images, icons, graphs, and charts should have alternative text.
Access to assistive technologies
  1. Allow assistive technologies to read and take the inputs.
  2. Enable the product for various assistive technologies.
  3. Test all user journeys with real users for better user experience and usability.
Social Privacy
  1. Make users aware of privacy and data access.
  2. Give users visibility into how their data is stored and deleted.
  3. Make the product available in offline mode.
“Addressing accessibility in the design system is the right place to start your inclusivity efforts because it’s the foundation of your product.”
Common mistakes committed while designing
Typography
Humans with visual impairments can find certain letters and styles confusing. Therefore, potentially confusing letter heights, weights, lengths, and sizes must be clearly defined.
  • The Web Content Accessibility Guidelines recommend a line-height of 1.5 for body copy. Evaluate, and reduce or increase as necessary.
  • Scanning long lines of text is tiring for the eyes. Research indicates that the average online line length is around 70–80 characters. Limit lines to no more than 16 words.
  • Centered text is not accessible text. Centering creates different starting positions for each line, which creates issues for visually impaired readers.
  • Never use ‘ALL-CAPS’ in body copy. Use small caps if short-length capitalization is required; they are great for emphasis, abbreviations, and subheaders.
Contrast
There are a lot of things to consider when making your content accessible from a color and contrast perspective, including:
  • Be careful with light shades of color, especially grays – they are difficult to see for people with low vision (a small contrast-ratio check is sketched after this list).
  • Ensure icons fit equal sizes; if some contain circles, make sure those circles have the same diameter. Icons should have a consistent style.
  • Do not rely on color alone to convey information to your users. For example, make sure your links have underlines or some other visual indicator besides color.
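One practical way to act on these points is to check contrast ratios programmatically during design reviews. The sketch below implements the WCAG relative-luminance and contrast-ratio formulas in Python; the sample colors are arbitrary, and WCAG AA expects at least 4.5:1 for body text and 3:1 for large text.

```python
def _linearize(channel: float) -> float:
    # Convert an sRGB channel (0-1) to linear light, per the WCAG definition.
    return channel / 12.92 if channel <= 0.03928 else ((channel + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb) -> float:
    r, g, b = (_linearize(c / 255) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Light gray text on a white background: roughly 3:1, which fails WCAG AA for body copy.
print(round(contrast_ratio((150, 150, 150), (255, 255, 255)), 2))
```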
Layouts
Remember that a designer and an artist are different professions. In design, we create a product for people, which means creative impulses should be applied only where they do not interfere with the user experience.
  • Avoid experimental positioning of elements on a screen, page, or card without good reason; otherwise, the user may get confused and leave your site or delete the application.
  • While considering the structure and layout, also account for users on a mouse, keyboard, touchscreen, or other adaptive technology device. Once this skeleton structure is ready, the styling of each sentence and paragraph comes into the picture.
  • It is essential to check with developers about screen sizes, since a website will be used on a wide range of devices, and front-end developers usually don't have a design background and will implement the design exactly as it is provided to them.
  • Use provided heading styles in correct order to create structure. Avoid formatting headings manually to be large and bold.
Navigation
Without a doubt, the most frustrating aspect of any user interface is usually the navigation. Dedicating extra attention to this area will inclusively improve the experience for all types of users. Bear in mind the following tips to reduce confusion:
  • When linking, use anchor text that sets realistic expectations.
  • Maintain anchor text consistency when two links lead to the same destination.
  • Implement breadcrumbs to convey where the user stands in an event sequence.
  • Highlight the current keyboard focus (for input fields, a blinking cursor isn’t enough).
“Accessibility is solved at the design stage.”
Why designers should consider accessibility while making design decisions.
User actions play a vital role in an application, and every action a user takes is preceded by a designer's decision and plan. Good decisions help users complete their tasks with a better experience and navigation.
  1. While making decisions, you're likely to discover and correct usability problems that affect all users, and in doing so you also serve users with age-related accessibility needs, a rapidly growing customer segment.
  2. Businesses want to avoid claims of discrimination and legal action when they launch a product in the market, and designers help them by implementing accessibility standards.
  3. Making accessibility decisions while building the prototype and testing usability, before development begins, saves development time and stakeholders' investment.
  4. One more advantage of building websites with accessibility in mind is a higher-quality code base. For example, accessibility testing tools such as the a11y® testing platform can also identify errors that cause general usability problems.
Conclusion
Accessibility principles primarily benefit people with disabilities, but they also bring business gains: an expanded customer base, a polished brand image, better search engine rankings, and general improvements to usability.
References
https://rangle.io/blog/can-a-design-system-be-accessiblee
https://www.granite5.com/
https://www.bounteous.com/canada/node/63166/?lang=en-ca
https://dzone.com/articles/guide-for-htmlcss-developers-creating-layout-per-w
https://www.aditus.io/patterns/multiple-navigation-landmarks/
Renjith Raju
#Technology | 4 Min Read
Transit Gateway is a highly available network gateway offered by Amazon Web Services. It eases the burden of managing connectivity between VPCs and from VPCs to on-premises data center networks, allowing organizations to build globally distributed networks and centralized network monitoring systems with minimal effort.
Earlier, the limitations of VPC peering made it impossible to extend VPN connections to on-premises networks directly. To use a transit VPC, a VPN appliance had to be purchased from the AWS Marketplace and used to connect all the VPCs to on-premises networks, which increased both cost and maintenance.
Advantages of AWS Transit Gateway
  • Transit Gateway is highly available and scalable.
  • The best solution for hybrid cloud connectivity between On-premise and multiple cloud provider VPCs.
  • It provides better security and more efficient control of traffic through its route tables.
  • It helps manage routing across AWS accounts globally.
  • Manage AWS and On-premise network using a centralized dashboard.
  • This helps to protect against distributed denial of service attacks and other common exploits.
Cost comparison between Transit Gateway vs VPC peering
  • Cost per VPC connection: VPC Peering – none; Transit Gateway – $0.05/hour.
  • Cost per GB transferred: VPC Peering – $0.02 ($0.01 charged to the sender VPC owner and $0.01 to the receiver VPC owner); Transit Gateway – $0.02.
  • Overall monthly cost with 3 connected VPCs and 1 TB transferred: VPC Peering – connection charges $0 + data transfer $20 = $20/month; Transit Gateway – connection charges $108 + data transfer $20 = $128/month.
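To make the comparison concrete, here is a small, self-contained calculation that reproduces the numbers above. The hourly attachment rate and per-GB charge are illustrative list prices and vary by region; check current AWS pricing before relying on them.

```python
HOURS_PER_MONTH = 720
TGW_ATTACHMENT_PER_HOUR = 0.05   # $ per VPC attachment per hour (illustrative)
DATA_PER_GB = 0.02               # $ per GB transferred/processed (illustrative)

def vpc_peering_monthly(gb_transferred: float) -> float:
    # Peering has no per-connection charge; only data transfer is billed.
    return gb_transferred * DATA_PER_GB

def transit_gateway_monthly(num_vpcs: int, gb_transferred: float) -> float:
    attachment_cost = num_vpcs * TGW_ATTACHMENT_PER_HOUR * HOURS_PER_MONTH
    return attachment_cost + gb_transferred * DATA_PER_GB

vpcs, gb = 3, 1000  # 3 connected VPCs, ~1 TB transferred per month
print(f"VPC peering:     ${vpc_peering_monthly(gb):.0f}/month")            # ~$20
print(f"Transit Gateway: ${transit_gateway_monthly(vpcs, gb):.0f}/month")  # ~$128
```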
Transit gateway design best practices:
  • Use a smaller CIDR subnet and use a separate subnet for each transit gateway VPC attachment.
  • Based on the traffic, you can restrict NACL rules.
  • Limit the number of transit gateway route tables.
  • Associate the same VPC route table with all of the subnets that are associated with the transit gateway.
  • Create one network ACL and associate it with all of the subnets that are associated with the transit gateway. Keep the network ACL open in both the inbound and outbound directions.
In the architecture below, HashedIn, as a cloud organization, has a master billing account; accounts for logging, security, and networking infrastructure; a shared services account; three development accounts; and one production account. AWS Transit Gateway is the single point for all connectivity.
For each account, VPCs are connected to the Transit Gateway via a Transit Gateway Attachment. Each account has a Transit Gateway Route Table associated with the appropriate attachment to direct traffic, and subnet route tables are used to reach the other networks. The network account's transit gateway is connected to the on-premises data center and other networks.
Here are the steps to configure multiple AWS accounts with AWS Transit Gateway (a scripted equivalent using the AWS SDK is sketched after the route-table steps below):

  • Firstly, access the AWS Console
  • Up next, create the Transit Gateway
  • Lastly, create Transit Gateway Attachment
    • VPC
    • VPN
    • Peering Connection
These are the three available options when creating a Transit Gateway Attachment.
With the VPC option, an ENI is created in multiple Availability Zones. A TGW attachment needs to be created in every Availability Zone used by the VPC so that the TGW can communicate with the ENI attachment in the same Availability Zone.
  • Create Transit Gateway Route Table
  • Add the routing rule for the respective TransitGateway ID
  • Create an association and attach TransitGateway attachments
  • Create static routes
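The console steps above can also be scripted. The following is a minimal sketch using boto3, the AWS SDK for Python; the VPC ID, subnet IDs, and CIDR block are placeholders, and cross-account sharing of the gateway through AWS Resource Access Manager is omitted for brevity.

```python
import boto3

ec2 = boto3.client("ec2")  # assumes credentials for the networking account

# 1. Create the Transit Gateway.
tgw = ec2.create_transit_gateway(Description="Central hub for all accounts")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# 2. Attach a VPC (use one subnet per Availability Zone).
attachment = ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",                      # placeholder
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],   # placeholders
)
att_id = attachment["TransitGatewayVpcAttachment"]["TransitGatewayAttachmentId"]

# 3. Create a route table, associate the attachment, and add a static route.
rtb = ec2.create_transit_gateway_route_table(TransitGatewayId=tgw_id)
rtb_id = rtb["TransitGatewayRouteTable"]["TransitGatewayRouteTableId"]

ec2.associate_transit_gateway_route_table(
    TransitGatewayRouteTableId=rtb_id,
    TransitGatewayAttachmentId=att_id,
)
ec2.create_transit_gateway_route(
    DestinationCidrBlock="10.20.0.0/16",                # placeholder CIDR
    TransitGatewayRouteTableId=rtb_id,
    TransitGatewayAttachmentId=att_id,
)
```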
Transit Gateway Attachments are associated with a Transit Gateway Route Table, and multiple attachments can relate to a single route table. Propagation dynamically populates the routes of one attachment into the route table of another attachment, while associations bind Transit Gateway Attachments to a route table.
In addition, the network manager helps reduce the operational complexity of connecting remote locations and other cloud resources. It also acts as a centralized dashboard to monitor end-to-end networking activity in your AWS account.
Conclusion
The Transit Gateway is a centralized gateway through which AWS and on-premises networks can be managed on a single dashboard. It also helps simplify network architecture, which was earlier complicated by managing inter-VPC connectivity and Direct Connect separately.
Manish Dave
#Data Engineering | 7 Min Read

Amazon Elasticsearch Service is a fully managed service that enables you to search, analyze, and visualize your log data cost-effectively, at petabyte-scale. It manages the setup, deployment, configuration, patching, and monitoring of your Elasticsearch clusters for you, so you can spend less time managing your clusters and more time building your applications. With a few clicks in the AWS console, you can create highly scalable, secure, and available Elasticsearch clusters.

How is AWS Elasticsearch Costs Calculated?

Elasticsearch clusters consist of master and data nodes. AWS ES does not charge anything for the service itself; the only cost you bear is for the instances and attached storage. Here are the two types of nodes in ES.

Master nodes

Master nodes play a key role in cluster stability, and that’s why we recommend using a minimum of 3 dedicated master nodes for production clusters, spread across 3 AZs for HA. It doesn’t have additional storage requirements, so only compute costs are incurred.

Data nodes

Deploying your data nodes into three Availability Zones can also improve the availability of your domain. When you create a new domain, consider the following factors:

  • Number of Availability Zones
  • Number of replicas
  • Number of nodes

Note that not all EC2 instance types are available for ES; only a subset is allowed to run as either data or master nodes.

Don't use T2 instances as data nodes or dedicated master nodes. Because they are burstable, you don't get dedicated CPUs, and they don't support the UltraWarm setup used for cost-effectiveness.

Storage

Another factor to consider while tracking cost is the EBS storage attached to data nodes. Even though storage costs are small compared to nodes, at petabyte scale they become significant. Use provisioned IOPS only when required; it costs about 1.5x the price of general-purpose SSD.

UltraWarm

This is an option in AWS ES to optimize storage and, thus, cost. With UltraWarm enabled, you get additional nodes that move your infrequently accessed data to S3 instead of EBS, and choosing S3 saves costs for archived or infrequently accessed data. The only cost you pay with UltraWarm is the cost of the chosen storage. UltraWarm has two node types, Medium (1.6 TB capacity) and Large (20 TB capacity), and the minimum number of nodes is two, i.e., 2 medium or 2 large. Choose this option only when you are dealing with large-scale storage.

Elasticsearch Monitoring & Analysis

The first step to optimizing cost is to become cost- and usage-aware; monitor your existing domain with the following tools.

AWS Cost Explorer

Use billing and cost explorer services to get a breakdown of the total cost.

Event Monitoring

Amazon Elasticsearch Service provides built-in event monitoring and alerting, enabling you to monitor the data stored in your cluster and automatically send notifications based on pre-configured thresholds. Built using the Open Distro for Elasticsearch alerting plugin, this feature allows you to configure and manage alerts using your Kibana interface and the REST API, and receive notifications.

Here are some screenshots of those metrics. These allow you to understand the actual resource requirements of your ES domain.

CloudWatch

You can also get CloudWatch-based metrics. These are charged on a per-metric, per-domain basis and help you gain insight into utilization per node. Check out the recommended metrics on the AWS website.

Controlling Elasticsearch costs

To control costs, you need to answer the following first based on the above metrics –

  1. How many data nodes do you need, and of what type?
  2. How many shards do you need?
  3. How many replicas do you need?
  4. How frequently do you access the data?

Pre-process your data – Most often we find redundant and duplicate events in our log files, since we aggregate data from multiple sources. Ensure that the data you store in ES is relevant, and try to reduce or dedupe your logs. Another form of preprocessing is sending a reduced stream to Elasticsearch, and for this you can use Kinesis Firehose.
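As a rough sketch of that idea, the snippet below drops exact-duplicate log events before forwarding the rest to a Kinesis Data Firehose delivery stream that is assumed to be configured with the Elasticsearch domain as its destination; the stream name is a placeholder.

```python
import json
import boto3

firehose = boto3.client("firehose")
STREAM = "logs-to-elasticsearch"  # placeholder delivery stream name

def forward_unique(events):
    """Drop exact duplicates, then push the remaining events to Firehose."""
    seen, sent = set(), 0
    for event in events:
        key = json.dumps(event, sort_keys=True)
        if key in seen:
            continue  # skip duplicates to cut what ends up stored in ES
        seen.add(key)
        firehose.put_record(
            DeliveryStreamName=STREAM,
            Record={"Data": (json.dumps(event) + "\n").encode()},
        )
        sent += 1
    return sent
```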

Archive Data – Blocks of data that are accessed less frequently should be moved to S3 using UltraWarm if at all required. Use policies to run tasks like archival, re-indexing, deleting old indexes regularly, etc. to make it an automated process.

Overheads – The underlying OS and ES have their own storage overheads for running and managing nodes, so you will have to provision more than the actual source data: roughly 5% per node for the OS, 20% per node for ES, and 10% per node for indexes as a minimum overhead, all of which is doubled to meet replica and HA requirements. Thus, the actual storage available is less than you pay for. The AWS-recommended formula for the same is: Source Data * (1 + Number of Replicas) * 1.45 = Minimum Storage Requirement.
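Plugging numbers into that formula makes the overhead tangible. A minimal sketch, assuming 1 TB of source data and one replica:

```python
def minimum_storage_gb(source_data_gb: float, replicas: int = 1) -> float:
    # AWS-recommended sizing: source * (1 + replicas) * 1.45,
    # where the 1.45 factor absorbs OS, Elasticsearch, and index overheads.
    return source_data_gb * (1 + replicas) * 1.45

print(minimum_storage_gb(1024))  # 1 TB of source data -> ~2970 GB to provision
```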

The number of shards – Each shard should have at least one replica, so choosing the wrong number of shards more than doubles the storage problem. AWS provides a recommended way to calculate shards, but a more pragmatic approach we took was to break storage requirements into chunks of roughly 25 GB. This size is big enough to make proper use of the RAM available on most AWS ES node types, but not big enough to cause CPU issues. To be more specific, ensure that a single shard can be loaded into memory and processed easily, and keep memory pressure in mind: only about 75% of the heap should be used by queries. Read more in the JVM section below.
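Following that ~25 GB-per-shard rule of thumb, a rough primary-shard count can be derived as below; the index size is a placeholder.

```python
import math

def shard_count(index_size_gb: float, target_shard_gb: float = 25) -> int:
    # Aim for shards of roughly 25 GB so each fits comfortably in node memory.
    return max(1, math.ceil(index_size_gb / target_shard_gb))

print(shard_count(600))  # a 600 GB index -> 24 primary shards
```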

Number of Nodes – If you read all the above points carefully and combine them, it makes sense to keep the number of nodes minimal, e.g. three nodes across 3 AZs; fewer nodes result in less overhead-related wastage.

Use VPC – Using VPC and 3 different subnets for 3 AZs is not only secure, but you can also reduce the data transfer cost over the internet.

Use reserved instances – You can reserve instances for a one or three year term, to get significant cost savings on usage as compared to on-demand instances.

Snapshots – You can build data durability for your Amazon Elasticsearch cluster through automated and manual snapshots. A manual backup costs you standard S3 storage prices. By default, the Amazon Elasticsearch Service automatically creates hourly snapshots of each domain and retains them for 14 days at no extra charge. However, choosing manual over automated snapshots should be a business decision based on affordable downtime, RTO, and RPO requirements.

JVM Factor – Elasticsearch is a Java-based application, so it depends on the Java Virtual Machine heap to run well. A few tips (a small sketch of the cache-check and cache-clear calls follows this list):

  • Use memory-optimized instance types.
  • Avoid queries on wide ranges, such as wildcard queries.
  • Avoid sending a large number of requests at the same time.
  • Avoid aggregating on text fields. This helps prevent increases in field data. The more field data that you have, the more heap space is consumed. Use the GET _cluster/stats API operation to check field data. For more information about field data, see the Elastic website.
  • If you must aggregate on text fields, change the mapping type to a keyword. If JVM memory pressure gets too high, use the following API operations to clear the field data cache: POST /index_name/_cache/clear (index-level cache) and POST */_cache/clear (cluster-level cache).
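As a rough illustration of the last two tips, the snippet below uses the elasticsearch Python client to read field-data usage from the cluster stats and clear the field data cache. Authentication is omitted (an Amazon ES domain typically also requires SigV4 request signing or fine-grained access control), and the endpoint and index name are placeholders.

```python
from elasticsearch import Elasticsearch

# Placeholder endpoint; add authentication/request signing as your domain requires.
es = Elasticsearch(["https://my-domain.us-east-1.es.amazonaws.com:443"])

# Check how much heap field data is consuming (equivalent to GET _cluster/stats).
stats = es.cluster.stats()
fielddata_bytes = stats["indices"]["fielddata"]["memory_size_in_bytes"]
print(f"Field data in heap: {fielddata_bytes / 1024 ** 2:.1f} MiB")

# If JVM memory pressure gets too high, clear the field data cache.
es.indices.clear_cache(index="my-index", fielddata=True)  # index-level cache
es.indices.clear_cache(fielddata=True)                    # cluster-wide cache
```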

Conclusion

Cost optimization is not a one-time task; keep a constant eye on your requirements and Cost Explorer to understand the exact need. Observe the monitoring charts: if the data volume reduces, Elasticsearch usage will also reduce, which can help you minimize the number of nodes, shards, storage, and replicas.

Himanshu Varshney
#General | 5 Min Read

Well, every individual gets into a frenzy when it comes to their first job, and I am no exception. As a young developer, I myself wasn't sure of the right career path when I graduated. Back then, in 2003, most IT services were focused on backend operations and had limited scope for product development; there were only a handful of product companies in the market.

As a young engineer, I craved to experiment, learn, code, and build new products. Like many new graduates, I too was weighing the pros and cons of joining a service company versus a product company. I wasn't sure of the technology that I loved the most, but I was determined to keep my passion for building new products and learning new technologies alive.

Luckily for me, I got the best of both worlds when I joined Trilogy, a product development service company. Back then, the concept of SaaS product development services was still new. As people realized the business benefits of SaaS product development services, the demand and market for SaaS began to increase. Over the years, I got to work on a variety of technologies and lead projects for Fortune 500 companies across the world.

Now, after nearly a decade of experience in product consulting, I can confidently say that joining a product development services company as a newbie was the best career decision I made in my life.

While there are many perks of joining a pure product company, in my personal experience a service company is a good choice in the initial stages of your career (a stage where you are uncertain of what you want in your life).

My stint at several SaaS product development services gave me the exposure and experience I needed. This paved the way to nurture my real interest and start my own company, HashedIn Technologies, in 2010.

Here are the top 5 reasons to join a Product Development Services Company in the initial stages of your career:

1) Technical Breadth

When it comes to technology, a services company is like a love cum arranged marriage, while a product company is like an arranged cum love marriage. In a product company, you are restricted to a few choices and technologies to work with. On the contrary, a product development service company gives you the chance to date and explore various technologies before you finally settle with the one that you are really passionate about. You are given the opportunity to play with multiple tech stacks and get wide exposure, before choosing your area of expertise. This also gives you the flavor of trying everything. This in a way helps you to get the right kind of exposure to diverse technologies in the initial stages of your career.

2) Right Mentors

The best part about working at a product development service company is the kind of mentors you get to work with. A good mentor can accelerate career progression and help you grow as a person. While there is a lot of focus on growth in product companies, there is limited opportunity for mentorship. In a race to move fast without breaking things, all efforts and resources are focused on scaling fast. You are mostly on your own. As a newbie, this can be scary. At a product development service company, you have the opportunity to work with experienced mentors who have been there, done that, and can guide you towards the right solution.

3) Work With Industry Leaders

In a product development service company, you are graced with the chance to work with a variety of customers and leaders like AWS, Redis, and Heroku. After your stint in an IT service company, you will have familiarity with multiple domains which would eventually help you to upskill yourself to a greater extent.

4) Culture of Learning

The work culture in product development service companies is very people-friendly and focuses a lot on aspects like hiring, training, and events. As a fresher, I got to learn a lot from other teams, with wide exposure to varied learning scenarios.

5) Wear Multiple Hats

If you want to get a taste of something more than just development, a product service company is the best option. As a newbie, I got to interact with clients, and manage projects besides just coding. In the early days of your career, this opportunity to explore multiple roles and don many hats helps you build an astounding portfolio for yourself.

To conclude, your professional success depends on how well you know what you want in life. Some people figure this out early on. Some people take their time to find their true calling. There is no one right way to succeed. As a newbie, if you haven’t yet figured out life, it is okay. Join a product service company to kickstart your career, get the exposure you need to figure out what you really need and venture into a horizon that would lead you up the ladder of success.

Noorul Ameen
#Business | 2 Min Read
Communication is a process by which information is exchanged between individuals or groups through a common system of symbols, signs, or behavior. In a business organization, maintaining effective communication is of great value and importance, as it leads to the desired effect or the required action. However, in today's business landscape, where we work across geographies, time zones, and cultures, maintaining space for effective communication is always a challenge.
Communication Gap is when the meaning intended by the speaker or sender is not what is understood by the recipient. There might be several reasons for communication gaps to arise in the workplace. It is as much important to bridge the communication gap between employees, as it is to maintain effective communication in the workplace. Thus, identifying as to how the communication gaps emerge in the workplace and taking the necessary steps to maintain effective communication at work will be of great benefit to a business organization.
Following is HashedIn User Experience Team’s take on this important but often “less-attended-to” topic. At HashedIn UX, the designers perform role plays, storyboards and interactive sessions even for the dumbest problem on earth as we understand the value these techniques bring to the table and their impact on business when people neglect their communication gaps.
To understand how this gap starts and becomes neck-deep during product development, HashedIn UX designers role-played a game. The game is played with two players facing opposite directions: one narrates a picture without seeing what is being drawn, and the other draws the picture being narrated.
See for yourself how communication gaps creep in…
In the next article, I am going to share with you the techniques HashedIn UX devised to manage this gap effectively. Stay tuned!
Judelyn Gomes
#Business | 4 Min Read

User experience (UX) plays a paramount role in achieving organizational goals. The success or failure of a project depends on the level of user experience it delivers. Every organization is striving to improve the overall experience of its product and deliver quality service to meet customer satisfaction, and uses various strategies to enhance the user experience. Generally, there are seven factors that influence the user experience of a project:

  1. Useful
  2. Usable
  3. Findable
  4. Credible
  5. Desirable
  6. Accessible
  7. Valuable

Factor #1. Useful

An organization launches a product with the intention of meeting customer needs. People are more inclined to purchase a brand or product that yields better results quickly and effectively. If you build products that are not useful to the customer, they will form a bad impression of your brand, so make sure you are delivering products that are genuinely useful to customers.

Factor #2. Usable

The products or services offered by your company should be easy to use, and they must support users in achieving their goals effectively and efficiently. Usability therefore has to be treated as critical when developing products. The first generation of a product or service may have usability gaps; these must be rectified in the second generation, and improvement should continue from there.

Factor #3. Findable

An organization must promote its product or service so that people know it exists. Only then will people come to know about your business and reach you to fulfill their needs. Hence, findability is essential for all businesses. It helps to publicize your brand in newspapers, on websites, on social media platforms, and on other channels dedicated to promotion.

Factor #4. Credible 

The trust customers place in your product or service is termed the credibility factor of user experience. The trust you build for your brand among people matters greatly to your success.

As customers spend their time as well as money on your company, you should never fail to deliver quality service. Also, implement strategies such as adding testimonials, portfolios, or partner references to improve the credibility of your business.

Factor #5. Desirable

The desirability factor is essentially the appeal your product or service holds in the target market. The most sought-after products are desirable for their brand, design, usage, and cost.

Improving desirability helps you win new customers alongside satisfied returning ones, thereby enhancing your business. Desirability shows itself when satisfied customers come back, which plays a vital role in growing your business. It is, therefore, essential to increase your brand value, which in turn results in a better user experience.

Factor #6. Accessible

Accessibility is one of the essential factors of user experience: a product or service should be usable by all kinds of people. For instance, people with physical disabilities should also be able to benefit from your business.

Products shouldn't be created for only a certain community, as that can erode the value of your brand. It is important to create a sound strategic plan when starting a business, one that helps deliver products accessible to all of society.

Factor #7. Valuable

Whenever you deliver a product or service, ensure that it brings the best results and adds value to your brand. The product should not erode value for either the company or its customers in the long run; a lack of value on either side may spoil your business reputation.

Design your business in such a way that it boosts your reputation rather than lowering it. In addition, you can improve the user experience and the value factor by giving customers the chance to share feedback on your website through reviews.

Final Thoughts

To conclude, these seven factors play a vital role in influencing the user experience in an organization. Go through them to understand the core concepts and develop your business accordingly. Adopt these seven factors and enhance the user experience of your brand successfully!