Building a Data Stack on a Budget: An Affordable Guide to Data Management Sujeet Pillai January 17, 2023


A data stack is a combination of various tools and technologies that work together to manage, store, and analyze data. It typically consists of a data storage engine, an ingestion tool, an analytics engine, and BI visualization tools. In recent years, data stacks have become quite central to an organization’s operations and growth.

Data management is an essential part of any organization, and the way data is managed has evolved over the years. Data lakes and data warehouses were once affordable only for larger organizations. However, the open-source data stack ecosystem has grown significantly in recent years, providing powerful alternatives for every layer of the stack. This has pushed the envelope for what data stacks can do and lowered the barriers for organizations to adopt one.

One of the main reasons data stacks have become more accessible is the availability of open-source alternatives. For every layer of the stack, there are open-source options that pack a serious punch: often just as good as, if not better than, their commercial counterparts. They also tend to be more flexible and customizable, which matters for organizations that need to tailor their data stack to their specific needs.

Another reason is the availability of cheap cloud resources. Cloud providers such as Amazon Web Services, Google Cloud, and Microsoft Azure offer low-cost options for setting up and running a data stack, putting one within reach of even smaller organizations.

Organizations should seriously consider this framework over a patchwork of point-to-point integrations. Such a patchwork is usually the result of an ad-hoc approach to data management; it is difficult to maintain and limits the organization’s ability to gain insights from its data. A data stack framework, on the other hand, provides a structured approach that is easier to manage and makes those insights attainable.

An Affordable Data Stack

One affordable data stack that organizations can consider is the following:

Storage Engine: ClickHouse

ClickHouse is a column-oriented database management system built to process large data loads on commodity hardware, and it can be self-hosted using Docker. Its columnar storage model gives it excellent query performance on analytical workloads.
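The benefit of the columnar model can be illustrated in a few lines of JavaScript (an illustration of the storage idea, not ClickHouse’s actual implementation): storing each column as its own contiguous array lets an aggregate scan only the data it needs.

```javascript
// Row-oriented layout: each record is stored together, so an aggregate
// over one column still walks every full row.
const rows = [
  { id: 1, region: 'EU', revenue: 120 },
  { id: 2, region: 'US', revenue: 80 },
  { id: 3, region: 'EU', revenue: 200 },
];

// Column-oriented layout (the model ClickHouse uses): each column is a
// contiguous array, so summing `revenue` reads only revenue values.
const columns = {
  id: [1, 2, 3],
  region: ['EU', 'US', 'EU'],
  revenue: [120, 80, 200],
};

const totalRevenue = columns.revenue.reduce((sum, r) => sum + r, 0);
```

Compression also works better on columns, since values of one type sit next to each other.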

Ingestion Engine: Airbyte

Airbyte is an open-source data integration platform that can be monitored and managed from a UI. It can be self-hosted using Docker and supports ClickHouse as a sink. By automating the ingestion of data sources, it makes it easy to bring data into the stack.

Analytics Engine: dbt

dbt is a powerful analytics engine that helps organize data models and processing. It’s built on SQL with Jinja templating superpowers, making it accessible to far more people. When building out an analytics process in dbt, it’s quite helpful to use a conceptual framework to organize your models; I found this blog an excellent starting point.

Visualization Engine: Metabase

Metabase is a powerful visualization tool that makes it easy for organizations to gain insights from their data. Its library of visualizations covers most bases, and its query builder, the ‘question wizard’, lets non-SQL experts get answers from their data. It has a self-hostable open-source version that can be set up in Docker relatively easily.

Infrastructure

For infrastructure, we recommend Amazon Web Services. The whole stack can be deployed on a single m5.large instance for smaller-scale data and scaled up to a cluster configuration for larger data sets. The components can also be split onto separate servers as load grows: if many Metabase users are accessing the data, move Metabase onto its own server; if ingestions are large, give Airbyte its own server; and if storage and query volumes are large, run ClickHouse as a cluster. This lets organizations scale the stack as their data needs grow.

Production considerations

When taking the data stack to production, there are several other considerations. Organizations should ensure reliable, fault-tolerant backups; set up security and role-based access; and build dbt models that cater to multiple use cases and normalize data values across sources. Monitoring and alerting, performance tuning, and disaster recovery planning also deserve attention.

Reliable, fault-tolerant backups are crucial to ensure that data is not lost in the event of a disaster. Organizations should have a well-defined backup and recovery plan in place. This should include regular backups, offsite storage of backups, and testing of backups to ensure they can be restored in case of an emergency.

Security and role-based access are also crucial considerations. Organizations should ensure that only authorized personnel have access to sensitive data. This can be achieved by setting up role-based access controls, which ensure that only users with the necessary permissions can access sensitive data.

The dbt models themselves deserve care: they should cater to multiple use cases and normalize data values across sources, so that the data the organization works with is accurate, consistent, and reliable.

Finally, monitoring and alerting ensure the organization learns of issues as they arise, performance tuning keeps the stack running at optimal levels, and disaster recovery planning ensures data can be recovered if the worst happens.

Conclusion

In conclusion, data stacks have become increasingly affordable and accessible for organizations of all sizes. The open-source ecosystem now provides powerful alternatives for every layer of the stack, and organizations should seriously consider adopting a data stack framework over a patchwork of point-to-point integrations: it offers a more structured, manageable approach to data management and the ability to gain real insights from data.

Deploying a data lake to production with all these elements is a non-trivial technical exercise. If you do not have this expertise in-house you should consider using the services of a consulting organization with expertise in this area like Incentius. Drop us an email at info@incentius.com and we’d be happy to help.

 

Pros and Cons of Using Tailwind CSS Sujeet Pillai December 8, 2022

What is CSS?

CSS is an abbreviation for Cascading Style Sheets. CSS specifies how HTML elements should appear on screen, paper, or other media platforms. With CSS, you can control the appearance of multiple web pages at once by changing the styles in a single style sheet. Using CSS allows you to separate the content of your HTML document from the visual design, making it easier to update and maintain the look of your website. It also helps to improve the accessibility and performance of your website by reducing the amount of HTML code and allowing you to use more efficient selectors. CSS is one of the open web’s core languages and is standardized across Web browsers according to W3C specifications.

 

What is Tailwind CSS?

Tailwind CSS is a utility-first CSS framework; its 1.0 version was released in May 2019. It gives you the tools and the standardization to develop exactly what you want instead of limiting you to a predetermined design, which makes it especially useful for prototypes or small, dynamic projects where the design may change frequently. It can be used in conjunction with JavaScript frameworks such as React or Vue.js, or utilized stand-alone without any other framework or library. 1370 programmers on StackShare have acknowledged using Tailwind CSS.

 

Pros of Using Tailwind CSS

1) Easy and free to use

Tailwind styling is much easier to maintain. Tailwind CSS is a free and open-source framework that offers a quick way to create unique user interfaces; using it on your website facilitates development and responsiveness while offering a great degree of customization. More than 500 components are available for Tailwind projects and UI designs, and Tailwind helps implement a consistent design system without being too restrictive. Several well-liked tools integrate with Tailwind CSS, including Gatsby, Tailwind Starter Kit, Headless UI, Tailwind UI, and Windi CSS.

2) Better CSS Styling Process

When it comes to styling HTML, Tailwind CSS is one of the fastest frameworks available. Styling elements directly makes it simple to develop attractive layouts, because Tailwind provides hundreds of built-in classes that eliminate the need to start from scratch. Your production CSS contains only the styles you actually use, and making style changes without overriding things is simple.

3) Highly customizable

Tailwind CSS is a highly flexible framework. It ships with a default configuration, but a tailwind.config.js file can easily override it: the configuration file allows simple customization of color palettes, spacing, styling, themes, and so on. According to reports, 332 businesses use Tailwind CSS in their tech stacks, including MAK IT, Superchat, and überdosis.
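As a small sketch, the override mechanism looks like this; the brand color and extra spacing step are hypothetical values, not Tailwind defaults.

```javascript
// Illustrative tailwind.config.js: `theme.extend` adds to the default
// theme instead of replacing it wholesale.
const config = {
  content: ['./src/**/*.{html,js}'],
  theme: {
    extend: {
      colors: {
        brand: '#1e40af', // hypothetical brand color, usable as `bg-brand`
      },
      spacing: {
        128: '32rem', // adds a `w-128` / `p-128` scale step
      },
    },
  },
};

module.exports = config;
```

Anything not listed under `extend` keeps its default value, so overrides stay small and reviewable.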

4) Responsiveness and Security

A website can be viewed on devices with varying screen sizes: smartphones, laptops, and iPads. Tailwind defines breakpoints (small, medium, large, and so on) for these screen widths, and while coding, a developer can attach classes to those breakpoints to ensure the layout does not break on any device. Responsive changes can be applied conditionally to each HTML element, so there are no more media-query rules living in a separate CSS file; with this level of control, you can change any element at any given breakpoint and design the layout directly in the HTML using Tailwind’s pre-built classes. Aside from that, Tailwind has proven to be a stable, relatively bug-free framework since its initial release.

5) Optimization using PurgeCSS

PurgeCSS is a tool that helps optimize the CSS in your website by removing any unnecessary styles that are not being used. When building a website, it is common to use a CSS framework like Bootstrap, Materializecss, or Foundation, but you will likely only use a small portion of the styles these frameworks provide. PurgeCSS works by analyzing the content of your website and comparing it to the CSS styles you have included. It then removes any styles not being used in your content, resulting in smaller CSS files and improved performance for your website.
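A minimal purgecss.config.js sketch is shown below (the paths are illustrative): PurgeCSS scans the files in `content` for class names and strips unused selectors from the stylesheets in `css`.

```javascript
// Illustrative PurgeCSS configuration: `content` lists the markup to scan
// for used class names; `css` lists the stylesheets to purge.
const config = {
  content: ['./src/**/*.html', './src/**/*.js'],
  css: ['./dist/styles.css'],
};

module.exports = config;
```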

 

Cons of Using Tailwind CSS

1) Large HTML files

Because Tailwind CSS takes a utility-first approach, it frequently involves a large number of classes in the HTML, which can increase the download size of your HTML files; a larger HTML file may take longer to download and render in the browser. However, it is important to weigh file size against development efficiency: the utility-first approach allows rapid development of custom designs without writing custom CSS, and the extra class data in the HTML is highly compressible, which helps mitigate the impact on performance.

2) A Learning Curve for CSS Newcomers

Tailwind is fairly simple to grasp if you are already familiar with CSS: it is essentially the same CSS, written in a shorter form. For those who are not familiar with CSS, however, Tailwind is quite learning-intensive because of the sheer number of built-in classes, and using it effectively can be challenging even for experienced developers, since it requires a thorough knowledge of those classes. This learning curve can make it take longer to become productive. Some argue that relying heavily on pre-built classes can hinder a developer’s ability to fully master CSS; others counter that the time saved can be spent learning other important skills. Ultimately, the decision to use Tailwind CSS, or any CSS framework, should be based on a project’s specific needs and goals.

3) An installation process is required

To use recent versions of Tailwind CSS, you need to run a build step that generates the CSS. This requires additional tooling and can feel overwhelming for developers unfamiliar with front-end build processes. However, Tailwind CSS integrates well with many front-end frameworks, and the Tailwind CLI helps simplify the process. As always, weigh your project’s specific needs and goals when deciding whether to use Tailwind CSS or another CSS framework.

4) Tailwind CSS is not an all-rounder

Tailwind CSS cannot do everything. While its capabilities keep expanding, some CSS properties and advanced features remain beyond its scope, which means you may need to fall back on inline styles or custom classes alongside Tailwind to get things done. Is this a bad idea? No, but it does mean Tailwind isn’t a one-size-fits-all solution.

5) HTML and Styling are mixed

Tailwind works differently from most CSS frameworks in that you don’t write your own CSS rules. While this is advantageous for those unfamiliar with CSS, it also means that Tailwind embeds style rules in your HTML files, violating the ‘separation of concerns’ principle. Many developers also find that mixing page structure and style in this way makes the markup more verbose.

 

Conclusion

Tailwind CSS has many benefits in terms of maintainability, efficiency, and development speed, plus a fantastic ecosystem of UI elements and pre-existing designs, comprehensive documentation, and free tutorial videos on YouTube. Although it has a few limitations, its extensive library of CSS classes is useful for developers looking to improve their applications or websites.

Exploring Advantages and Disadvantages of Using Node.js Sujeet Pillai November 23, 2022

What is Node.js?

Node.js is a platform for executing JavaScript code on the server side. It is used to create apps that need a continuous connection between browser and server, and is frequently used for real-time applications such as news feeds, chats, and web notifications. Node.js typically serves as a backend service, but because it runs JavaScript, the same language can be used on both the frontend and the backend of an application.

Advantages of Node.js

  • Easy Scalability

i) Node.js supports both horizontal and vertical scalability. Applications with a Node.js backend can be distributed across multiple servers (horizontal scaling), and performance on a single server can also be improved by adding resources (vertical scaling).

ii) The cluster module is one of the many features included with Node.js. It enables load balancing across several CPU cores, making it easier to deliver results through smaller worker processes without exhausting memory.

iii) Node.js employs a non-blocking event-loop mechanism that ensures high scalability and allows the server to process requests in real time, which is why high-traffic websites are among Node.js’s primary users.

  • Cross-functional team building

i) Node.js provides full-stack development, which means the developer can create both client and server-side web applications. 

ii) Assume your company has two distinct teams: one in charge of product development, the other in charge of quality testing. When both teams work independently, confined to their respective responsibilities, communication gaps are likely. Node.js can help close this gap.

iii) Because the whole team works in one language, it can focus on improving the development life cycle and addressing challenges as they arise. Team members can communicate directly and devise solutions together, a working environment that promotes higher productivity and faster resolution of issues.

  • High-end performance of Applications

i) Applications with a node.js backend are extremely effective because of their potential to multitask.

ii) Its event-loop and non-blocking I/O operations enable code execution at a significant speed. This, in turn, improves user interface performance.

iii) GoDaddy, a web hosting company, used node.js during their Super Bowl ad campaign and saw significant improvement in their app performance. They could handle 10,000 requests per second with no downtime while employing only 10% of their hardware, all thanks to the high-end performance of Node.js as their backend.

  • Building Cross-Platform Applications

i) Node.js can help to build cross-platform applications, which eliminates the need to spend time writing separate code for different desktop platforms such as Windows, macOS, and Linux.

ii) As a result, businesses get shorter time-to-market and better scalability for their applications, along with a more user-friendly experience across a wide range of desktop platforms.

Disadvantages of Node.js

  • Inability to perform heavy computing tasks

Node.js is single-threaded and event-driven, which makes it unsuitable for heavy computational tasks. When Node.js receives a large CPU-bound task, it devotes all of its available CPU to completing that task, leaving other tasks waiting in the queue. This slows down the event loop and degrades the responsiveness of the application.
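A common mitigation, sketched below, is to split a CPU-bound loop into chunks and yield to the event loop between chunks with setImmediate (worker_threads is the heavier-duty alternative for truly large jobs). The summation task here is just a stand-in for any CPU-heavy computation.

```javascript
// CPU-bound work done in one synchronous pass blocks the event loop
// for its entire duration.
function sumBlocking(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) total += i;
  return total;
}

// Chunked variant: processes `chunkSize` iterations, then yields via
// setImmediate so pending I/O callbacks and timers can run in between.
function sumChunked(n, chunkSize, done) {
  let total = 0;
  let i = 1;
  (function step() {
    const end = Math.min(i + chunkSize - 1, n);
    for (; i <= end; i++) total += i;
    if (i > n) return done(total);
    setImmediate(step); // give the event loop a turn before continuing
  })();
}
```

The chunked version finishes slightly later, but other requests keep being served while it runs.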

  • Unstable API causing a change in codes

Frequent API (Application Programming Interface) changes, often backward-incompatible, are one of the most significant drawbacks reported by Node.js users. They force developers to change their code regularly to keep up with the latest version of the Node.js API, and for a company focused on providing a user-friendly experience, this churn can backfire and confuse customers.

  • Lack of library support

Many npm (Node Package Manager) registries and libraries are of poor quality, incomplete, or poorly documented, so a Node.js web application thrown together by amateurs becomes difficult to monitor and maintain. Because Node.js is open-source with such a large pool of libraries and modules, coding standards across that pool vary widely; only well-qualified, experienced experts can propel your project to success, so it is critical to select the right technology partner to build your web applications.

There is a widespread misconception that every JavaScript developer is also a Node.js developer. To learn Node.js, one must be familiar with JavaScript, but that alone does not make you a Node.js developer. Despite the constant increase in demand, finding an experienced Node.js developer is still challenging.

 The growing demand for Node.js:

  1. For starters, node is open-source and free to use, encouraging businesses to experiment and improve their scalability.
  2. Node.js is lightweight, which means it employs a simple event-driven architecture. Enterprises want to save money while providing the best features and efficiency possible to expand their market reach. Node’s dynamic functionality is assisting not only large corporations but also small and medium-sized businesses in achieving this goal.
  3. Companies saw significant improvements in productivity, economic growth, and application performance after implementing Node.js in their business strategy. According to studies, 85% of businesses use Node.js to build web applications.
  4. Enterprises based in the United States and Canada claim that incorporating Node.js into their strategy has increased their developers’ productivity by 68%. Adopting a better technology (Node.js) and allowing enough time for its adaptation is proving profitable for the organization in the long run.
  5. Companies such as Netflix, LinkedIn, Amazon, Reddit, eBay, and PayPal have expressed strong interest in implementing Node.js as their backend. Amazon even claims that Node.js has futuristic features, while Netflix asserts that node’s implementation will help to reduce startup time for better expandability.

Did you know?

1) Node.js is eBay’s primary backend programming language.

2) Node.js contributes to a 58% reduction in development costs.

3) Node.js contributes to a 50%-60% reduction in loading time.

4) According to research, Node.js is used by 36.42% of professional developers for its frameworks, tools, and libraries.

In a nutshell:

Node.js is easy to learn and use, free to download, and quick to set up, and you only pay for hosting once you deploy. This makes it easy to try out without spending any money.

Contact the expert team of Incentius for all your application and software development work. We bet our team will make sure that you get the best!

 

Types of Software Developers- Do you have trouble deciding the one you need? Sujeet Pillai

You do not need to be a programmer or a developer to implement digital transformation in your existing business or to introduce a new startup. You may always hire professionals. The trouble is, how do you know who to look for? Let’s have a look.

Software Engineers and Software Developers are not the same!

We understand that it is hard to digest, but yes, Software Engineer ≠ Software Developer! Let us dive deeper into this.

A software engineer creates tools for developing software using components from a hardware system and tends to tackle problems on a large scale. A software developer, on the other hand, creates software that works on a variety of machines, utilizing pre-built tools to create apps and complete projects. The work of a software engineer is sometimes a highly collaborative activity that necessitates cooperation abilities. The position of a software developer is typically more solitary, allowing them to apply many of the same abilities as their engineering counterparts on a smaller scale.

First things first: you do not require a software engineer!

Need for Software Developers- What do Developers do?

Consider our contemporary way of life: we’re constantly staring at small and large screens, then pausing to gaze at even larger ones. As long as people keep wanting the next better thing, the demand for software developers will continue to rise.

So, what is the role of a software developer? The software developer is the major weapon on the battlefield of innovation and software-based digital transformation. Businesses that are digitally transforming are integrating software into their distinctive value offerings, and through this integration they become more technology-driven. A developer is a software architect who ensures that the application or website works correctly, is secure, can withstand the test of time, and is easily upgraded and adapted, just as a “conventional” architect does with a building.

Today, software is so deeply embedded in our daily lives that the relationship you have with your customers is frequently tied directly to the effectiveness of your business operations and, more importantly, the experience your customers have. This emphasis on the developer also entails greater accountability for software quality and implementation.

Common Types of Software Developers you might need

Even developers who share much the same tech stack cannot usually transfer their skills and experience across domains. It’s the equivalent of asking your mobile developer to work on a game: he may be familiar with the technologies, but he is not a game developer. As a result, there are significant differences between developer types.

1 – Mobile App Developer

A mobile developer is familiar with the technicalities of mobile operating systems like iOS and Android, as well as the development environments and frameworks used to produce applications for them; Flutter, React Native, Ionic, and Quasar are all examples. Mobile developers build smartphone apps such as educational apps (for language learning, reading enthusiasts, sports fans, and so on) and online shopping apps, working closely with designers, QA engineers, and DevOps specialists. These developers may have expertise in one or both of the major platforms.

2 – Web Developer

In the late 1990s and early 2000s, web development became a popular way to break into the software development industry. Web developers are software developers with a focus on websites, and they come in three types: front-end, back-end, and full-stack. Front-end developers are in charge of the parts of the site that users see and interact with; back-end developers are in charge of the parts users don’t see, such as the server-side logic that tailors a page to each user. A full-stack developer works on both, and we cover full-stack developers as a separate category below.

3 – Full-Stack Developer

Full-stack developers straddle two independent web development domains: the front end and the back end. “Full stack” refers to the entire depth of a computer application. To put it another way, full-stack developers are the development world’s Swiss Army knives: well-versed in both front-end and back-end development, these professionals can switch smoothly from one development environment to the other because they are experts in many programming languages.

4 – DevOps Engineer

These are engineers familiar with the technologies and tools used to create, deploy, and integrate systems, as well as manage backend software. To simplify: backend developers build products, whereas DevOps engineers can design, deploy, and monitor those same apps. Consider the process this way: a programmer (developer) builds applications; operations deploys, manages, and monitors them; DevOps does both, developing as well as deploying, managing, and monitoring applications. DevOps involves knowledge of tools like Kubernetes, Docker, Apache Mesos, Jenkins, and the HashiCorp stack, among others.

Identifying the type of software developer your business needs

You’ve now learned about a few different kinds of software developers, whose skill sets may or may not overlap. There are many distinct software developer roles across technical functions, and there is currently no globally approved taxonomy, terminology, or industry glossary; job descriptions and expected skills vary from one company to the next. It is nevertheless worth learning the specifics of each type: knowing the most typical sorts of software developers helps bridge the gap between your company’s growth plans and your knowledge of software development. Before beginning or expanding any business venture, research how these roles intersect and where they diverge; with a thorough understanding of each type, you will be better prepared to hire software developers for your next project.

We, at Incentius, are happy to listen to your doubts regarding Product Development. Have a certain service or idea in mind, or simply want to talk about what’s possible? We’d be delighted to help. Contact us.

5 Key Technology Decisions for a Scalable MVP Sujeet Pillai August 3, 2022

 

Focus on the long term when deciding on technology. A technology that feels ideal for creating an MVP may not be able to scale up for the final product. Whoever makes the technology selection must have a thorough vision for the product and be clear on both the management and technical aspects. Some of the key technical decisions are listed below.

Architecture: Microservices or Modular Monolith?

Most businesses will be far better off implementing a modular monolith until their scale is large enough for microservices to make sense. For most small to medium-sized systems, the modular monolith is an excellent architecture: it retains a level of modularity akin to that of microservices, but because communication takes place within the monolith, it is significantly simpler.

Although neither is ideal for everyone, one of them may be ideal for you and your development team. You still have a monolith, but a modular one, which means costs are significantly lower than with microservices, saving resources. In most circumstances, introducing a new module will not be prohibitively expensive, debugging is much easier thanks to direct communication between modules, and the deployment procedure is more straightforward.
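A modular monolith can be sketched as modules with explicit interfaces that talk through plain in-process calls rather than network hops (the users and billing modules here are hypothetical):

```javascript
// Each module hides its state behind a small factory-built interface,
// so it could later be carved out into a service if scale demands it.
function createUsersModule() {
  const byId = new Map();
  return {
    create(id, name) {
      byId.set(id, { id, name });
      return byId.get(id);
    },
    get(id) {
      return byId.get(id);
    },
  };
}

function createBillingModule(users) {
  const invoices = [];
  return {
    invoice(userId, amount) {
      const user = users.get(userId); // direct call: no serialization, no network latency
      if (!user) throw new Error(`unknown user: ${userId}`);
      const inv = { user: user.name, amount };
      invoices.push(inv);
      return inv;
    },
    count() {
      return invoices.length;
    },
  };
}

const users = createUsersModule();
const billing = createBillingModule(users);
```

The module boundaries give you microservice-style discipline while keeping deployment and debugging as simple as a single process.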

Database: NoSQL or RDBMS?

NoSQL databases feature flexible data structures, scale horizontally, deliver fast queries, and are simple for developers to work with. They offer capabilities that SQL databases cannot match without significant expense and crucial tradeoffs in performance, agility, and other factors. Many developers and companies are drawn to NoSQL for the rapid development it enables, letting them get to market early and deliver upgrades faster.

Modern applications with more complicated, continuously changing data sets require a flexible data model that doesn’t need to be fully specified up front, which makes NoSQL a preferable solution in such cases. Unlike most relational databases, NoSQL databases are also built to store and handle data in real time. For these workloads, NoSQL databases hold real advantages over relational ones.
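The flexibility argument is easy to picture with plain objects standing in for documents (no real database involved): records in one collection can carry different fields, and a new field needs no schema migration.

```javascript
// Document-style records: the second product adds a field the first
// doesn't have, with no ALTER TABLE or migration step.
const products = [
  { sku: 'A1', name: 'Desk lamp', price: 20 },
  { sku: 'B2', name: 'Ebook', price: 5, fileSizeMb: 12 },
];

// Queries simply tolerate the missing field.
const downloadable = products.filter((p) => p.fileSizeMb !== undefined);
```

A relational schema would need a migration (or a nullable column planned in advance) to absorb the same change.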

Extent of Functionality?

The MVP strategy is consistent with the lean-startup philosophy of producing the right product on a small budget in a short amount of time. MVP development costs can be reduced by including only a few high-priority, essential features. The MVP then enables low-risk testing of the software.

Entrepreneurs frequently assume that every feature is essential and will be required by end customers. In actuality, elimination is the source of creativity. Remove all extraneous features and launch swiftly with only the basic functionality that embodies the product's main concept. An MVP concentrates on a single idea; the first and most important rule is to focus on essential functions rather than piling on features.

Buy or Build?

We frequently try to buy our way out of trouble, and there's nothing wrong with that: the Buy-Validate-Build methodology enables rapid validation, and rapid failure when warranted. Entrepreneurs frequently attempt to create everything from the ground up, but is this beneficial for your MVP? It must be measured in terms of cost, time, and effort. In most circumstances, purchasing turns out to be the more practical option.

However, if you have a unique business model whose differentiation rests on a new algorithm, then go ahead and build one. If your algorithm isn't your real differentiator, buy one first and optimize it later. The Buy, Validate, and Build methodology is the preferred strategy: it helps complete the MVP on schedule, establish a revenue stream, determine product-market fit, and ultimately focus on profitability.

Prioritize and stick to it

Prioritization helps maintain attention on the fundamental aim. Set measurable MVP objectives, and prioritize where to begin, what to build, when to launch, and when to change course. All of the features the MVP will support should be ranked. To prioritize them, analyze the needs: what do the users want, and does this product actually benefit them?

Sort the remaining MVP features into high, medium, and low priority, and then start building. If a company wants to see how its future product will look, it can build an MVP prototype first. It's hardly a stretch to claim that building an MVP teaches prioritization.

Conclusion: The purpose of developing an MVP is to quickly launch a product based on a pre-existing concept on a limited budget. This strategy allows a company to obtain user feedback on the core product and incorporate it into future upgrades. As a business owner, it is critical to concentrate on key responsibilities such as running the business, raising funds, and building a team. With so many complex technological decisions involved in designing an MVP, it's best to leave them to the experts @ Incentius.

 

Mobile App Localization: Why It is Important for Mobile App Success Sujeet Pillai June 1, 2022

Apps play a significant role in people's daily lives and are used in every country on the planet. People rely on mobile devices, and the apps they enable, for both business and recreation. Companies that create applications solely for their home markets are passing up significant income prospects. If you've created an app that's popular in your own country, chances are overseas markets will be interested in it as well. Through app localization, companies can acquire a wealth of new users by creating an app that resonates with local individuals. All you need is the right mobile app localization to get your software into those new markets.

What is App Localization?

App localization is the process of modifying your mobile app so that it can be used in multiple countries, catering to local consumers. This includes employing their native language, accommodating cultural sensitivities, and targeting the keywords they use in search queries.

Why Do You Need App Localization?

Because mobile applications are becoming increasingly significant, it is critical that they be localized. Mobile applications have evolved into marketing tools, and app localization ensures that the relevant items appear when potential buyers search. Information about those items should be in a language people can comprehend. Localizing mobile applications allows them to meet the demands of users who speak different languages. App localization also entails converting currencies, units, and dates into the appropriate local representations.
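A minimal sketch of those currency and date conversions (the locale table and format strings below are simplified assumptions; a production app would rely on a localization library rather than hard-coded conventions):

```python
from datetime import date

# Hypothetical per-locale conventions, hard-coded only for illustration.
LOCALES = {
    "en_US": {"date": "%m/%d/%Y", "currency": "${amount:,.2f}"},
    # Separator conventions are simplified here; a real app would use a
    # localization library to get grouping and decimal marks right.
    "de_DE": {"date": "%d.%m.%Y", "currency": "{amount:,.2f} €"},
}

def localize_price(amount: float, locale: str) -> str:
    """Render a price using the target locale's currency convention."""
    return LOCALES[locale]["currency"].format(amount=amount)

def localize_date(d: date, locale: str) -> str:
    """Render a date using the target locale's date order."""
    return d.strftime(LOCALES[locale]["date"])

print(localize_price(1234.5, "en_US"))           # $1,234.50
print(localize_date(date(2022, 6, 1), "de_DE"))  # 01.06.2022
```

The same 1 June 2022 date reads as 06/01/2022 to a US user and 01.06.2022 to a German one, which is exactly the kind of mismatch localization exists to prevent.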

Benefits of App Localization:

1. Global Accessibility:

The investment in mobile app creation is modest, but it can pay off handsomely if the app becomes popular. Are you still certain that your own region is the best market for your software? Bring yourself up to date! If you limit yourself to a single-language launch, regardless of your native language, you will miss out on massive markets. Mobile entrepreneurs and app developers can reach a vast number of people all over the world through platforms like Google Play and Apple's App Store. The linguistic barrier, however, is unavoidable. If a company is ready to expand, app localization can be a viable conduit for that expansion.

The company's expansion ambitions can be met by focusing on multiple languages and areas. It is therefore worth seeking out software development companies that can help you build your application with translation in mind. That can help you compete with local developers and achieve superior outcomes in a short period of time.

2. Economically Beneficial:

Although the notion of app localization may appear intimidating and costly, businesses will see a return on their investment. A company's market share and revenue can be expected to grow with each new territory it enters. Localization can make your application available in different regions of the world, tailored to their audiences, at little extra expense. As a result, it can help you reduce many of the expenditures associated with mobile application development, which benefits you in the long term. To get your money's worth out of your mobile applications, consumers must download them, and localized apps perform significantly better on this front. Without localization, you would have to design a separate application for each target audience, which would significantly increase costs.

There are presently over five billion active mobile users worldwide. Consider how much profit you could generate if your mobile app were localized to reach even a small percentage of that number!

3. More Recommendations:

When a company's app gains new users, the next step is to keep those customers engaged. Because the app business is so competitive, retaining clients can be difficult. Immaculate app localization is one way to earn user loyalty. If your application is tailored to a local audience, it has a far better chance of being recommended. Localization enables you to build your application with content, design, and features tailored to the specific locale, which aids promotion as well.

Because your application checks all the boxes for the target population, it will be well-optimized, which can attract far more visitors than you expect. Reviews will improve as a result, and revenue can grow quickly. Customers are likely to abandon an app that fails to match their expectations. By partnering with app localization service providers, businesses can ensure that the localization is appealing and engaging, resulting in a devoted consumer base.

4. Enhance Brand Image:

The number of monthly active users is an important indicator that earns a higher placement in app stores, so a rise in users from other nations translates into higher overall app rankings. The app localization process has many components, including localizing metadata, which helps buyers find the app more readily through keyword and phrase searches. Businesses all across the globe invest in app localization because it lets an app go viral in a much shorter period of time and connect with people around the world across various social media sites.

Because your application is localized, you can offer discounts to the local audience tied to their own events, eliciting favorable emotional responses from app users. These favorable reactions encourage customers to trust your company and buy your products and services. With app localization tools, you can meet your app's sales goals while maintaining a strong brand reputation.

Conclusion:

Understanding your audience in different markets is the goal of app localization. It allows you to have your application built for local users, which can help you develop a more established relationship and improve the program's performance. Because language and culture have a significant impact on user perceptions and habits, making the appropriate changes to an app is critical for effective worldwide expansion. All of this is available without the need to develop a unique application for each user base.

Pros and Cons of Cloud Computing Sujeet Pillai May 23, 2022

Companies should examine their current IT infrastructure and consider their workload and application constraints. Then they must determine whether the cloud will address or eliminate their current challenges and limitations. In this blog, we lay out these cloud facts for you, covering the most significant ones to answer your queries. The following are the benefits and drawbacks of using cloud computing:

What are the Benefits of Cloud Computing?

1. Reduced administrative bottlenecks:

Cloud computing makes management easier within a company. Whenever hardware is purchased or upgraded, the process involves several administrative duties that consume a significant amount of time. With cloud services, all you have to do is evaluate the best cloud service providers and their plans, then choose the one that meets your needs. The company's modest IT department, which it can afford to recruit, can focus solely on end-user experience management. Because the majority of the other work, such as maintenance, is handled off-site, you can be assured that your IT infrastructure will be managed effectively at all times. In the cloud, your system maintenance duties are removed as well; these are left to the cloud service. Your sole requirement is faith in your provider to do the job consistently. Cloud resources are available from anywhere on the planet, at any time, on any device, and you retain control over them.

2. Huge, perhaps unlimited storage:

If you don't use the cloud, you'll have to acquire the physical infrastructure that works best for your firm, and you never know when you might need to expand your company's storage capacity. Cloud computing can free up space in your workplace for extra workstations or amenities, while removing the need to budget for future equipment upgrades. You won't have to worry about installing specialized breakers, high-voltage wiring, special HVAC systems, or backup power. The cloud allows you to extend your storage easily as your demands grow: you can buy as much storage as you need, regardless of which cloud you use, and it's significantly less expensive than regularly buying new storage hardware and software. Most cloud services provide a large storage space for all of your important data, and even if you use it all, you can always upgrade to more secure cloud storage.

3. Backup and Recovery:

Data loss can have a severe impact on your organization. You might lose vital information, which could cost you money, waste your time, and harm your brand's reputation. Cloud backup is a service that backs up and stores a business's data and applications on a remote server. In the case of a system malfunction, shutdown, or natural catastrophe, businesses back up to the cloud to keep files and data accessible. You can have all of your data automatically backed up to the cloud on a routine basis, and the majority of cloud service providers can also handle data recovery. Because your data lives in the cloud, backing it up and recovering it is easier than with a physical device, and providers do it automatically without users having to think about it. As a result, compared to traditional data storage techniques, the backup and recovery procedure is much easier.

4. Increased Automation:   

Software integration in the cloud typically happens naturally. If you employ cloud-based apps, they are updated automatically without user input. You won't have to put in extra work to personalize and integrate your apps; this is normally taken care of on its own, and you can even handpick the services and software applications that you believe will work best for your company. Updating systems manually can be a difficult undertaking: the IT department must update every individual's system, which not only consumes time but also reduces productivity. Cloud computing goes a long way toward streamlining these routine updates, allowing your staff to focus on the tasks that propel your company ahead.

What are the Disadvantages of Cloud Computing?

1. Limited Control and Flexibility:

Because cloud infrastructure is owned, managed, and regulated entirely by the service provider, cloud customers have less influence over the operation and execution of services within it. Customers maintain control over their apps, data, and services, but may not have the same degree of control over the backend infrastructure, such as firmware updates and management or server shell access. The end-user license agreement (EULA) and management policies of a cloud provider may place restrictions on what customers can do with their deployments, specifying the limits the provider can impose on your deployment use. Even so, all reputable cloud computing companies give your company control over its apps and data, even if they don't let you change the underlying architecture.

2. Dependence on Internet Connectivity:

Cloud services are only as available as your internet connection. Because every interaction with your data and applications travels over the network, a slow or unreliable connection degrades performance, and a connectivity outage cuts off access to your own systems entirely. Organizations in areas with poor bandwidth, or without a redundant connection, should weigh this dependency carefully.

3. Cloud Downtime:

One of the most common criticisms of cloud computing is downtime. Unfortunately, no company is exempt, particularly when vital business activities cannot afford to be disrupted. A vulnerability of public clouds is that many tenants share the same servers, which increases the danger of attack and can slow the server down. Furthermore, because the cloud requires a strong internet connection and adequate capacity, there is always the risk of a service interruption, which can result in company downtime. No company today can afford to lose money because vital business operations were disrupted, so plan for cloud outages and business interruptions: attempt to reduce the negative impact and provide the highest degree of service availability for your customers and employees.

Takeaway:

The benefits of putting data in the cloud are hard to miss, but the drawbacks shouldn't be dismissed either. To be fair, companies must do a thorough analysis of their infrastructure and requirements. For most, however, the advantages of cloud computing outweigh the downsides by a large margin. Cloud computing is a managed service that can help businesses of all sizes save time and money.

Free PowerBI Template to analyze Employee Attrition! Sujeet Pillai November 26, 2020

 

Are you aware that it costs employers 33% of a worker's annual salary, on average, to hire a replacement? That is what the research states, along with the fact that 75% of the causes behind employee attrition are preventable. Employee attrition analysis, as discussed in one of our previous articles, focuses on identifying these preventable reasons behind employee turnover and the times of the year when you should expect maximum attrition.

Losing a customer-facing employee is especially concerning, as clients are more comfortable speaking with people they have an established rapport with. Moreover, it hampers your organization's collective knowledge base and relationships, and has a direct effect on revenue. Customer issues and escalations are more likely to increase when you have new folks on the job.

Hence it’s no surprise that employee engagement and employee retention are hot topics in the industry today (especially in a sales organization). The primary focus initially will revolve around preventing employee attrition and identifying methods to improve employee engagement.

There are several qualitative and quantitative metrics organizations can use to improve retention. The first step, however, is to build an attrition model that helps you identify the WHO and the WHEN of attrition. Once you have identified these metrics, it becomes easier to focus your attention on the key aspects of retention.

In our conversations, we noticed that industry leaders are aware of the actionable insights that can be drawn from employee information. However, the absence of an out-of-the-box tool was a uniform challenge for everyone. The tools available in the market are custom-made or developed on demand, requiring time and monetary investment; they are standalone tools with separate licensing and integration requirements, along with onboarding and learning needs. Power BI helps you overcome these challenges and does not need any technical know-how or onboarding. Considering that it comes bundled with the Microsoft suite, we have put together a free template that will meet your needs.

All you need to do is open this template with Power BI and connect it to your employee information to see insights like those in the first screenshot above. These graphs use the standard attrition rate formula and display the attrition rate by year. You can also drill down within a year and find the attrition rate by month, as seen in the second screenshot above.
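The standard attrition rate formula referenced above divides separations by the average headcount over the period; a minimal sketch of the arithmetic (the figures are illustrative):

```python
def attrition_rate(separations: int, headcount_start: int,
                   headcount_end: int) -> float:
    """Attrition rate (%) = separations / average headcount * 100."""
    avg_headcount = (headcount_start + headcount_end) / 2
    return 100 * separations / avg_headcount

# E.g. 12 exits in a year that started with 100 and ended with 92 employees:
rate = attrition_rate(12, 100, 92)
print(f"{rate:.1f}%")  # 12.5%
```

Computing the same quantity per month instead of per year gives the drill-down view the template exposes.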

Claim your free tool here

The above tool focuses on the trend of attrition within a year. We will soon follow up with a behavioral analysis of attrition and its reasons. Stay tuned, and drop us a note at sujeet.pillai@incentius.com if you have any specific needs over and above the free tool we've provided. We would be happy to get on a quick consultation call to address them.

Wow! 6 years of Incentius! Sujeet Pillai September 11, 2019

It is a clear indication of how busy we were that we missed acknowledging our 5th anniversary.
Nevertheless, here we are, completing our 6th year. I couldn't be prouder to announce that
since the first year, we have continually grown and evolved as a company. At this moment, when we are all on
cloud nine, I would like to share our little story of how it all started.

Background:

It all started when Amit and I used to hang out for a couple of drinks every month at one
of our favorite joints and talk about all sorts of lemons life was throwing at us.
Dubeyji (yeah, that's what we call our Director of Finance, Mohit Dubey), who had recently shifted to Noida to take care of his
family business, used to join us on occasional Skype chats. Amit and I have known each other since 1999, from our time as batchmates
at IIT Bombay. We met Dubeyji in 2004, in our early professional days, when we all started our consulting careers at the same company.
All of us were among the first few employees to join the company's recently started India operations, which gave us a lot of
early exposure to project ownership, people management, and client management. Reminiscing about our time in consulting and the
challenges associated with different kinds of situations at the start of our careers was our favorite pastime.

Ideas abound, 2006-2012:

We were a trio that always had an entrepreneurial itch. In those days, we met monthly
to talk about the wacky business ideas we had and discuss how we could take them to market.
Looking back, our early ideas may have had potential, but they were not well-formed. More importantly,
it's now evident that we lacked complementary skills and real-world experience in running a company.

Fast forward a few years: Jaitex Exports (Dubeyji's family business) was doing pretty well; Amit was
close to making Associate Principal at ZS; and I had moved to a more core technology role and managed
a couple of early-stage start-ups. Before we knew it, it was 2013, and instinctively we knew that said entrepreneurial
itch had to be scratched sooner rather than later. When you make your first strike, always pick the move
you know best. Healthcare, incentive compensation, sales force analytics, and technology had to be the
core competencies of our first venture.

The intervening five to six years had given us the complementary skills and experience that were key to pursuing
this dream of ours. Amit had doubled up on knowledge in the pharma SFE domain, Dubeyji was ready with valuable seed
capital and real-world accounting and finance knowledge, and I finally had the confidence to run the operations of
a company independently. Talking to clients day in and day out, understanding the practical problems of the industry,
and closing the gaps between their expectations and reality became our vision.

Incorporation, 2013:

On the 9th of September 2013, Incentius, a new-age technology-focused company, was incorporated, and we were ecstatic.
Our initial goal was to use our strengths to become implementation partners of several ICM and SPM players in
the healthcare space and provide them with other analytics and operations support. We started talking to several clients
but received constant feedback on the challenges they were facing with the existing players in the space. The primary concern
shared with us was the lack of intuitiveness of some of the existing solutions and the heavy reliance on an expensive
outsourcing model. Hence, our vision evolved toward the capabilities and services we provide today: delivering high-end
analytical solutions, developing advanced technology solutions for complex business process management systems, and creating
in-depth analytical models along with data-driven reports for enterprises. Our strategy was to be a service organization
focused on providing technological support to established consulting companies that were trying to expand their portfolios.
In mid-2014, Amit joined us, and we focused on more advanced projects that helped clients solve complex problems and
fructify their technology vision in complex business processes. Incentius focused on flexible engagement models and helped
various small consulting firms deliver their projects with a unique rapid-prototyping-first strategy.

Today, 2019:

I take pride in the fact that today Incentius has close to 25 clients and has delivered more than 100 projects over the years.
The clients we started with on day one of Incentius are still with us. I am particularly thankful to all our clients who
showed their faith in us and believed that we would push, build, and deploy every requirement of theirs. You are our best critics
and have helped us grow an inch every day. A company is nothing without brilliant people, and I'd like to thank every one of our
current and former employees who contributed immensely to bringing us to where we are today. They are the true assets of Incentius.
Lastly, I would like to thank my partners, Amit and Dubeyji, for their constant support and for sticking together through our great,
and sometimes truly scary, times to make our little dream a reality.

Of course, none of this would have been possible without the support and love of my wonderful family: my amazing wife and my
extremely handsome sons.

Cheers!

A low risk approach to design & develop enterprise web apps Sujeet Pillai January 27, 2017

 
             
Enterprises are constantly commissioning, creating, and deploying web applications to achieve various objectives across their business operations. The standard operating procedure for such deployments is a proposal stage, an extensive discovery and functional-requirements documentation stage, implementation, testing, and rollout. This traditional approach turns out to be a very expensive and time-consuming affair, and as a result most ideas are nixed early. To mitigate this situation and boost innovation in enterprises, Incentius proposes an alternative strategy for such deployments, which we internally call the thick prototype strategy.

Thick Prototype Strategy

The core of this strategy is to enable the enterprise business owner to commission a 'thick' prototype for new ideas at relatively lower cost and quicker turnaround time. This is essentially a prototype web frontend that encompasses almost all frontend functionality. It lays out the frontend design and workflow in detail and implements (or mimics) most functionality in full. The thick prototype also mocks up sample reports, visualizations, and dashboards at this stage. The focus is on mimicking the frontend look and feel and the workflow functionality.

A new discovery process

The primary change in this strategy is that the discovery process is now embedded in the prototype creation stage. The thick prototype is iterated upon based on client input, so requirements and functionality are captured in live visual form rather than in functional-requirements documentation. If the client can provide a consistent set of mock data to populate the prototype, this stage becomes even more effective, since it lets the thick prototype mimic backend functionality to a degree as well. Broken down to the bare bones, a large number of enterprise apps are essentially workflows with a reporting/visualization layer. This is why the thick prototype is so effective as a discovery process: workflow interaction points are explained beautifully through visual prototypes rather than extensive textual documentation and screenshots. The dynamic prototype also captures the exact workflow functionality better than static screenshots or extensive textual requirements. Business users and owners can understand information flow, visual element positioning, and logical operations early in the app development cycle; usually, they don't get a view of how the application looks and feels until the UAT (User Acceptance Testing) stage. The visual approach to discovery also facilitates design grounded in user experience.

Better funding and reduced risk

The thick prototype concept also enables quicker buy-in from senior stakeholders and allows for rapid deployment of resources and funding for such enterprise application projects. Business leaders are generally more willing to fund projects that they can visualize. Thick prototypes can also help create end-user excitement, and hence advocacy, for the application; gaining end-user buy-in early and incorporating relevant feedback also works as a conduit to better funding. In addition to better funding, the thick prototype concept reduces risk. The thick prototype is inexpensive to build in comparison to even a full discovery stage in the traditional approach, typically running only 15-25% of the total estimated project cost, and it is sufficient to demo to relevant stakeholders and gain their buy-in. A GO/NO-GO decision point is included after the thick prototype is delivered, so if stakeholders do not agree with the proposed solution, the total sunk cost in the project is fractional rather than substantial. And because the thick prototype discovery process is iterative and open to suggestions and changes early, it substantially reduces the risk of rework during the build phase, and hence the risk of scope creep, allowing better control over project costs.

Scaling during build

If the right approach and technology are used in building the thick prototype, it can be scaled during the build phase into the full application. This reduces design anxiety among the project owners, since the final system will look and work pretty much exactly as the prototype did, and project owners, end users, and sponsors relate far better to the final application than in the traditional approach. A supplementary benefit is that training documentation, videos, and resources can be built reliably off the thick prototype, which is available early and can be previewed to the user base before the actual launch of the full application. This creates awareness of the solution up front through different support avenues and lets the user base hit the ground running when the full application launches.

Spur innovation amongst enterprise apps

The low financial risk involved in the thick prototype concept can spur innovation in the enterprise application space. A larger number of business owners can try out their ideas without extensive budget allocations. Such innovation will lead to the next big idea in enterprise management, creating greater value for shareholders, efficiency among employees, and transparency for upper management.

Case Study

A client recently approached us to create a dashboard giving their field reps the ability to track sales performance. They were, however, very particular that the dashboard be based on modern design concepts and be intuitive and easy for the field reps to follow. Our suggestion was to adopt the thick prototype paradigm for the project. Incentius created a prototype frontend for the dashboard, with UI/UX elements mimicking how it would perform in production, populated with mock data. The client was able to visualize the frontend, and even present it to some key opinion leaders in the field and gather feedback, before the project was fully commissioned. This also helped the field leaders provide a list of changes they would like to see. Incentius iterated on the prototype until all stakeholders were satisfied and then scaled the same prototype into the full application. This approach gave the client complete control over exactly the product they were receiving and made the launch of the dashboard a success. It was so successful that the client has requested an extension of this platform to add new feature sets, and wants to first see a thick prototype for these additional requests before moving to a full build. The paradigm has become the standard operating procedure for this engagement.

Incentius pioneered this concept through trial and error to get the technology, build approach and project management just right. We have successfully created several such thick prototypes and scaled them to full applications as well. Drop us a note if you would like your business owners to spur game-changing innovation in your organization while being exposed to lower risk.

What do you think of our Thick Prototype paradigm? Please feel free to let us know through the comments.

A free tool to visualize costs associated with salesforce attrition Sujeet Pillai July 21, 2015

 

Voluntary attrition is costly. This fact is well known. It becomes more prominent in industries where salespeople play a much larger role in driving the top line. It is desirable to track the salesforce voluntary attrition rate on an ongoing basis and analyze attrition trends and patterns. You can read more about this here.

The important question to ask, however, is "how do we quantify the effect of attrition on a salesforce?". Is it possible to estimate the financial impact of losing one salesperson? Can we create a model to understand the impact and implications of various factors?

The key aspect to consider when calculating the financial impact of salesforce attrition is the loss of productivity. While fixed transition costs like hiring, termination, and interview costs matter, the loss of productivity during the transition period between losing one salesperson and replacing him/her is the single largest source of financial loss from attrition. This is especially pronounced when you have a recurring revenue model. Loss of productivity happens during the ramp-down period (reduced productivity before a rep quits), the ramp-up period (reduced productivity just after a new rep joins), and the vacancy gap (zero productivity while no rep is in the seat).

Incentius has created a standard cost-of-attrition model which you can tweak to your needs using attrition parameters you've estimated for your salesforce. Use it to estimate the budgetary impact of attrition within your salesforce, and to estimate the savings you'll create by reducing attrition by a certain level. This helps you allocate a budget to HR activities that reduce attrition.
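A model like the one described can be sketched in a few lines. The parameter values below are purely illustrative assumptions, not figures from the Incentius model; substitute the rates you've estimated for your own salesforce.

```python
# Sketch of a simple cost-of-attrition model based on productivity loss.
# Every default value below is an illustrative assumption.

def attrition_cost_per_rep(
    monthly_revenue=50_000.0,   # full-productivity revenue per rep per month
    ramp_down_months=2, ramp_down_productivity=0.6,
    vacancy_months=3,           # seat empty, zero productivity
    ramp_up_months=4, ramp_up_productivity=0.5,
    fixed_costs=15_000.0,       # hiring, interviewing, termination, training
):
    """Estimate the financial impact of losing and replacing one salesperson."""
    ramp_down_loss = ramp_down_months * monthly_revenue * (1 - ramp_down_productivity)
    vacancy_loss = vacancy_months * monthly_revenue
    ramp_up_loss = ramp_up_months * monthly_revenue * (1 - ramp_up_productivity)
    return ramp_down_loss + vacancy_loss + ramp_up_loss + fixed_costs

total = attrition_cost_per_rep()
# ramp-down 40k + vacancy 150k + ramp-up 100k + fixed 15k = 305,000
```

Multiplying the per-rep figure by your expected annual attrition count gives the budgetary impact; rerunning with a lower count gives the savings from reduced attrition.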

Access the model here

Read more posts about other HR analytics here – http://www.incentius.com/blog-posts/analysis-of-internal-vs-external-hiring-decisions/

Would you like to get a sample salesforce attrition model for your company? Please email us at
info@incentius.com

And if you got this far, we think you’d like our future blog content, too. Please subscribe on the right
side.

Gauge IPL teams overall performance consistency using analytics (Cricket 2008 – 2015) Sujeet Pillai May 4, 2015

 

IPL Special Blog: Analyzing team performance

IPL season is upon us and the madness is in full flow. Along with the IPL come the inevitable discussions on which team is better, who is more consistent, etc. Here at Incentius we want to take an analytical stab at this question. How can we compare teams across the matches that they've played? Note that we want to look at overall consistency and not just winning performance. If a team loses a close match, it should still be rated higher than a team that lost by a big margin. Similarly, winning teams should get more credit for a big victory than for a close one.

Normalizing scores

The core question here is one of normalizing scores in a game. We'll do it on a scale of 0-1. For example, if team A defeats team B in a match, team A would get a 'performance metric' of, say, 0.60 and team B would receive 0.40 (so that they add up to 1). The greater the extent of the victory, the higher team A's metric and the lower team B's. Normalizing to 1.0 ensures that every game is equally weighted. This is the approach we chose to take. If we wanted to weight each game differently, say by how long ago it was played, we could do that as well. Or we could weight each game by the total performance of both teams, which would give high-scoring matches a higher weight than low-scoring ones.

Measuring extent of performance

The second question is how to define the extent of the victory. Obviously the number of runs scored by a team should be part of it, as should the total number of wickets lost in the innings. A team that scores 150 in 12 overs while losing 5 wickets has performed better than a team that scores 150 in 12 overs while losing 7.

Strategy for measurement

So what strategy should we use for such measurement? Let's use the Duckworth-Lewis method in reverse. The Duckworth-Lewis method is a statistical model that estimates the number of runs an average cricket team is expected to make given the resources it has at hand. In cricket's case, the 'resources' are the number of overs left to be bowled and the number of wickets in hand. Essentially, Duckworth-Lewis is a matrix with the number of wickets on one axis and the number of overs left on the other; the value at each intersection tells you how many runs the team is expected to make. Read more about the Duckworth-Lewis method and its application on its Wikipedia page.

The Duckworth-Lewis method is primarily used for 50-over matches. Luckily for us, however, Rianka Bhattacharya, Paramjit S. Gill and Tim B. Swartz have calculated the corresponding T20 table in their paper available here. Here is their table:

Overs/Wickets 0 1 2 3 4 5 6 7 8 9
20 1 0.969 0.93 0.879 0.813 0.722 0.599 0.448 0.297 0.176
19 0.956 0.909 0.877 0.83 0.769 0.683 0.565 0.42 0.272 0.153
18 0.917 0.867 0.829 0.787 0.732 0.654 0.542 0.402 0.257 0.139
17 0.877 0.823 0.789 0.738 0.697 0.628 0.522 0.387 0.246 0.128
16 0.835 0.782 0.753 0.705 0.664 0.602 0.503 0.374 0.235 0.12
15 0.792 0.743 0.709 0.669 0.626 0.574 0.484 0.362 0.227 0.112
14 0.751 0.707 0.673 0.637 0.593 0.546 0.464 0.35 0.218 0.105
13 0.715 0.674 0.636 0.603 0.562 0.515 0.443 0.338 0.21 0.098
12 0.683 0.637 0.602 0.568 0.529 0.475 0.419 0.326 0.202 0.091
11 0.65 0.599 0.566 0.533 0.497 0.439 0.393 0.313 0.194 0.085
10 0.613 0.56 0.526 0.501 0.46 0.408 0.361 0.3 0.186 0.079
9 0.579 0.523 0.479 0.461 0.425 0.378 0.331 0.283 0.177 0.072
8 0.54 0.483 0.443 0.417 0.389 0.349 0.302 0.261 0.167 0.066
7 0.493 0.442 0.402 0.374 0.354 0.321 0.272 0.234 0.157 0.059
6 0.417 0.385 0.357 0.33 0.317 0.29 0.242 0.2 0.145 0.052
5 0.362 0.334 0.31 0.286 0.273 0.255 0.215 0.17 0.122 0.044
4 0.308 0.28 0.261 0.241 0.224 0.207 0.183 0.142 0.1 0.035
3 0.254 0.228 0.211 0.194 0.177 0.165 0.144 0.116 0.079 0.025
2 0.197 0.172 0.155 0.141 0.127 0.119 0.106 0.093 0.062 0.016
1 0.137 0.113 0.097 0.085 0.073 0.067 0.06 0.052 0.042 0.009

Calculating performance

Now, how do we use the Duckworth-Lewis table to calculate performance for a match? What we're going to do is calculate an 'effective score' for each team: if a team finished its innings without using up all of its wickets and overs, we argue that it could have used those remaining overs and wickets to score extra runs. The number of extra runs is defined by the D/L table.

Hence, if in an innings a team is all out or finished all 20 overs:

effective_score = actual_score

If however a team has still wickets in hand and overs left to play:

effective_score = actual_score + extra_runs(wickets in hand, overs left -> looked up in the D/L table)

Once the effective_score of each team is calculated, we calculate the performance metric by using:

performance_metric_team_a = team_a_effective_score/(team_a_effective_score + team_b_effective_score)
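As a sketch, the calculation above might look like this, using two rows from the table. The average full-innings score used to convert resource fractions into runs is an assumed constant for illustration; the paper's own scaling may differ.

```python
# Two rows from the T20 Duckworth-Lewis table above:
# overs left -> {wickets lost: fraction of resources remaining}.
DL = {
    8: {0: 0.540, 1: 0.483, 2: 0.443, 3: 0.417, 4: 0.389, 5: 0.349},
    5: {0: 0.362, 1: 0.334, 2: 0.310, 3: 0.286, 4: 0.273, 5: 0.255},
}

AVG_T20_SCORE = 160.0  # assumed average full-innings score used for scaling

def effective_score(actual_score, wickets_lost, overs_left):
    """Actual score plus the extra runs the unused resources were worth."""
    if wickets_lost == 10 or overs_left == 0:
        return float(actual_score)      # all out or innings complete
    return actual_score + AVG_T20_SCORE * DL[overs_left][wickets_lost]

def performance_metric(eff_a, eff_b):
    """Normalize the two effective scores so the pair sums to 1."""
    return eff_a / (eff_a + eff_b)

# A team finishing 150/3 with 5 overs unused gets credit for those resources:
# 150 + 160 * 0.286 = 195.76
```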

Analyzing performance

Now that we've calculated the performance metric of every team for every match, let's ask the data some questions.

Consistency

Which teams have been most consistent across 2008-2014?

Chennai is the highest, no surprises there. Mumbai comes in second. KKR, despite their two title wins, are 5th, explained by their dismal performance in 2009. Kings XI haven't won a single IPL but come in third.

* Note that Sunrisers Hyderabad and Deccan Chargers have been merged together.

Biggest Win

Which match was the biggest win in IPL history in terms of relative effective score? Funnily enough, the first ever match of the IPL was the biggest victory. Remember Brendon McCullum murdering the RCB attack?

Kolkata Knight Riders 222/3 (20/20 ov); Royal Challengers Bangalore 82 (15.1/20 ov)
Performance Metric – KKR = 0.7302, RCB = 0.2697

Then comes the
second match of IPL 2009
:

Royal Challengers Bangalore 133/8 (20/20 ov); Rajasthan Royals 58 (15.1/20 ov)
Performance Metric – RCB = 0.6963, RR = 0.3037

In third is the 1st
Semi-final of IPL 2008
:

Rajasthan Royals 192/9 (20/20 ov); Delhi Daredevils 87 (16.1/20 ov)
Performance Metric – RR = 0.6882, DD = 0.3118

Head to head performance

How do teams usually perform when pitted against each other?

It seems Chennai vs Delhi is the strongest performance in favour of Chennai. Next comes Chennai vs Hyderabad also
in favour of Chennai. Third is Mumbai vs Kolkata in favour of Mumbai.

Hope you had as much fun reading this blog as we had writing it. Check out some of our other blogs at http://www.incentius.com/blog

Drop us a mail at info@incentius.com to get in touch. Also don’t
forget to subscribe to this blog on the right to get future blog posts like this one.

How to analyze employee attrition – HR analytics Sujeet Pillai October 8, 2014

 

An issue that every company deals with is attrition. Sales being an especially high-attrition function makes this analysis paramount. Sales attrition is the result of several factors, including unoptimized sales compensation, unrealistic quotas, ineffective mentoring, career-path ambiguity, training inefficacy or just bad recruiting. Hence the ability to slice and dice sales attrition in many ways to understand trends and their root causes can seriously help sales leadership make the changes required to build a healthier, higher-performing sales force.

Numerically analyzing attrition is a bit tricky. This stems from the fact that the base of employees is continually in flux. Every month new hires join the salesforce, some employees are involuntarily terminated, some voluntarily leave the company, and some others go inactive without leaving the company, such as when they take a long-term leave of absence. Additionally, the quality of attrition is important. Let's say two companies of about the same size each lose about 25 salespeople a month. Are they experiencing the same problem? What if one company is losing more experienced salespeople whereas the other is losing mostly salespeople only 2-3 months into the job? These two companies have wildly different problems. The first may have an issue like its sales compensation program not rewarding top performers enough, while the other may have a recruiting issue, since new hires are probably not relating their job to what they were told during recruiting.

There are myriad ways in which we can slice attrition. In this blog we’ll list a few methods:

Attrition rate

The rate of attrition, or the inverse of the retention rate, is the most commonly used metric when analyzing attrition. It is typically calculated as the number of employees lost every year over the employee base. This employee base can be tricky, however. Most firms just use a start-of-year employee count as the base. Some firms calculate it on a rolling 12-month basis to get a full-year impact. The ratio becomes harder to use if your firm is growing its employee base. For example, let's say on Jan 1st of this year there were 1,000 employees in the firm, and over the next 12 months we lost 100 employees. Is it as straightforward as a 10% attrition rate? Where it gets fuzzy is how many of those 100 employees were in the seat on Jan 1st. Were all 100 existing employees as of Jan 1st, or were some of them new hires who joined and left during the year? Hence the attrition rate must be looked at from several views.
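The distinction between the two views can be made concrete with a small sketch. The records below are illustrative; in practice they would come from your HR system.

```python
# Sketch of the two attrition-rate views, assuming each employee record
# carries a hire date and an optional termination date (None = still active).
from datetime import date

employees = [
    (date(2013, 3, 1), None),                 # long-tenured, still active
    (date(2013, 6, 1), date(2014, 2, 15)),    # in seat on Jan 1, left during year
    (date(2014, 4, 1), date(2014, 9, 30)),    # hired and lost within the year
    (date(2013, 1, 10), None),                # long-tenured, still active
]

start, end = date(2014, 1, 1), date(2014, 12, 31)

# Base for the "existing employee" view: everyone in the seat on Jan 1.
base = [e for e in employees if e[0] < start and (e[1] is None or e[1] >= start)]
existing_losses = [e for e in base if e[1] is not None and e[1] <= end]
existing_attrition = len(existing_losses) / len(base)   # 1/3 in this sample

# "Total attrition" view: everyone lost during the year, regardless of hire date.
total_losses = [e for e in employees if e[1] is not None and start <= e[1] <= end]
```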

Existing employee attrition

This view asks the question, "How many of my employees who worked here a year ago have since left?" It fixes the set of employees you're looking at to just those that were employed 12 months ago. The figure below plots how many of those employees are still in their seats over the 12 months. The plot is always a decreasing curve. Here is a sample graph from a firm.

 

Such a view is important because it tends to show where slightly more tenured employees leave. It leaves out those who joined and quit within 3 months, which is more of a localized recruiting issue than a systemic issue in the company.

Total employee attrition

This view displays total attrition by month. It does not discriminate as to which employees quit, whether they were new hires or 3-year tenured employees. The plot below shows how many employees quit each month over the last 12 months.

 

Employee Tenure

Closely related to employee attrition is total employee tenure: the higher the attrition, the lower the average tenure of employees in the company. Let's look at a few ways to track employee tenure over time. First, let's define what a tenured employee means in the context of your company. Some call this the break-even period: the time it takes for an employee to mature into the role. This may vary from 6 months to 1.5 years based on the complexity of the sales process, the tools involved and the product sales lifecycle.

Tenured employee proportion

First, let's look at the proportion of employees who are tenured versus those who are new. The higher the tenured proportion, the better the job the company is doing at retention. This also directly impacts sales performance, because tenured employees tend to do better than new ones.

 

Tenured employee actual

In a growing company, however, just looking at the proportion of tenured employees is not enough. As new hires come in to increase the employee base, the tenured employee proportion will automatically decline. Hence we should also look at the actual number of tenured employees. Below is a graph of actual tenured vs. new hire employee counts for a firm.

 

Notice that in recent months both tenured and new hire counts are increasing. This means that while the company is hiring more, it is also doing a better job at retention, since more tenured employees are staying on.

Batchwise churn analysis

Is every batch of new hires that joins your company the same? Do they perform identically? Such a view can be very useful to study the 'employee lifecycle': how batches start off and how they perform as they mature. This information can be critical when cross-referenced with hiring sources and may also be used to measure recruitment performance.

Below is a view of employees who were hired in a particular month and how that batch churned over their lifetime at the company. This is typically possible if we have at least 2-3 years of company history.
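A batchwise churn view reduces to a simple survival calculation per hire cohort. This is a minimal sketch; the tenures below are made up for illustration, and a real analysis would build one curve per hiring month.

```python
# Sketch of a batchwise churn curve: for one monthly hire cohort, the
# fraction still employed m months after joining.
def batch_survival(tenure_months, horizon=12):
    """tenure_months: months served by each hire (None = still active).
    Returns the surviving fraction at each month 0..horizon."""
    n = len(tenure_months)
    return [
        sum(1 for t in tenure_months if t is None or t >= m) / n
        for m in range(horizon + 1)
    ]

curve = batch_survival([2, 5, None, 8, None], horizon=6)
# month 0 -> 1.0, month 3 -> 0.8 (the 2-month leaver gone), month 6 -> 0.6
```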

 

Conclusion

HR analytics is an up-and-coming area that can make HR departments highly data-driven and improve their efficiency manifold. Incentius can help build such HR analytics for firms looking to revolutionize their HR operations.

As a bonus, here is a free interactive tool to estimate the financial impact of losing a salesperson. Would you like a sample employee attrition analysis for your company? Feel free to download the tool and try it out, and do write to us at info@incentius.com if you need a detailed analysis of several other metrics.

And if you got this far, we think you’d like our future blog content, too. Please subscribe on the right side.

 

Uncover the secrets of top sales people through analytics Sujeet Pillai September 4, 2014

 

What do top salespeople do differently? Can we analyze the behavior of top performers using sales performance data to identify traits that can be used for coaching lower performers? There is a lot of literature on improving salespeople's efficiency by focusing on the softer aspects of sales coaching. But what if there is behavior that top performers exhibit without knowing it themselves? Maybe the data can talk to us.

Let's take the example of a software firm. It sells three different types of products, each with the option of attached services. The products are non-competing, so multiple products can be sold to the same customer. Let's say these products are an anti-virus solution, a firewall solution and an anti-malware solution; in short, AV, FW and AM. Each of these product lines also has different flavours, which are progressively priced. For example, the anti-virus solution may have home, SMB and enterprise editions.

Define Performance

First, let's define performance. Without defining performance appropriately, how do we know who the top performers are? All the salespeople have assigned quotas, so we're simply going to use revenue achievement against quota as our performance measure. People with high revenue achievement are thus top performers. Let's quintile all the reps on the basis of revenue achievement and place them in 5 quintile buckets.
We have identified our top performers. Let us now look at different analyses of the performance data of the top two quintiles to understand behaviors that set them apart from the rest of the salesforce.
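The quintiling step can be sketched as follows. Rep IDs and figures are made up for illustration; quintile 1 is the top bucket.

```python
# Sketch: rank reps by revenue achievement to quota and bucket them
# into quintiles 1 (top) through 5 (bottom).
def quintile(reps):
    """reps: list of (rep_id, revenue, quota). Returns {rep_id: quintile 1..5}."""
    ranked = sorted(reps, key=lambda r: r[1] / r[2], reverse=True)
    n = len(ranked)
    return {rep_id: (i * 5) // n + 1 for i, (rep_id, _, _) in enumerate(ranked)}

buckets = quintile([
    ("A", 120, 100), ("B", 95, 100), ("C", 80, 100),
    ("D", 110, 100), ("E", 60, 100),
])
# "A" (1.20 achievement) lands in quintile 1, "E" (0.60) in quintile 5
```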

Portfolio split

First, let's analyze the split of product sales by quintile, both in terms of units sold and revenue dollars.

Product split by quintile (Revenue)

Product split by quintile (# of units)

There is some variation in the split of product sales by quintile. The absolute units and revenue are much higher in the top quintile (which is obviously expected). However, there doesn't seem to be a distinct trend to suggest that the top performers favor one product over another. Hence there doesn't appear to be any significant behavioral trend in the product portfolio split.

Cross selling

Do top performers sell more products on average to the same customer? This is commonly known as cross-selling. Let's take a look at the split of one-product, two-product and three-product deals sold by performance quintile.

Multi-product deal split by quintile

Now there's a trend! It seems the top performers tend to sell multiple products to every second customer. It's distinctly clear that the lower performers sell many more single-product deals than the top performers; there may be additional opportunity within their converted prospects that they are leaving on the table. Hence, if we encourage, incentivize and train our lower performers to cross-sell more and ask the right questions to uncover extra opportunities, we can improve overall sales performance. The company's marketing department could also offer customer discounts on second and third products on the same order to encourage this behavior further.
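The multi-product deal split behind this chart can be sketched as a simple count of deal sizes per quintile. The deals below are made up, using the AV/FW/AM products from the example.

```python
# Sketch: count 1-, 2- and 3-product deals per performance quintile.
from collections import Counter

def deal_split(deals):
    """deals: list of (quintile, products). Returns {quintile: Counter{size: count}}."""
    split = {}
    for q, products in deals:
        split.setdefault(q, Counter())[len(set(products))] += 1
    return split

split = deal_split([
    (1, ["AV", "FW"]), (1, ["AV", "FW", "AM"]), (1, ["AV"]),
    (5, ["AV"]), (5, ["FW"]), (5, ["AV", "AM"]),
])
# quintile 1: one single-, one two- and one three-product deal
# quintile 5: two single-product deals, one two-product deal
```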

Up selling

Are the top performers selling higher-revenue product flavors compared to the lower performers? Let's take a look at average revenue per product line per customer. That should give us a good view of this behavior.

Average revenue per product line per customer by quintile

Hmm. It seems the top performers are indeed selling higher-revenue product flavors than the lower performers. Note that due to pricing differences between products, comparing average revenue levels across products is not meaningful. The difference isn't as significant as the multi-product split, but a trend exists to suggest better up-selling by the top quintiles. Hence there is some advantage for the company in coaching and incentivizing up-selling. Certain types of discretionary, manager-approved customer offers can also help this trend along.

Multi-location deals

Are top performers selling to larger corporate groups? This is a more pronounced possibility in SMB sales, and especially in the franchise business: many times the owner of one McDonald's actually owns a set of franchises. This offers the possibility of making one sales pitch and capturing several orders at once. Let's look at the average number of orders per customer by quintile. Note that this may not be straightforward to analyze if your sales systems define an order as unique to one customer; it may require some special data cleansing/mining of customer name patterns to identify the same customer across orders.

Average orders per customer

So it seems about 1 in 10 customers that the top quintile sells to is a multi-location deal. The company can exploit this trend by training salespeople to ask their prospects about other businesses they may own. Again, marketing can play a role here by incentivizing customers to purchase company products for all their businesses through discounts or free services on multi-location deals.

Additional thoughts

We have only skimmed the surface with this analysis. For larger salesforces there may exist clusters of top performers within the top quintile that behave differently. For example, one cluster may go for volume and maximize deal counts, whereas another may spend more time with each customer to sell higher-revenue flavors. Their revenue achievement against goal may be the same, but their paths to get there differ. Identifying such clusters can further our understanding of top-performer selling behavior and help drive coaching requirements. Additionally, for larger salesforces it may be beneficial to decile performance rather than use quintiles; this isolates the top performers to a smaller sample and makes trends easier to visualize.

Incentius can analyze your sales data and identify such behaviors that can help your company push sales performance.

Would you like to get a sample analysis of your sales data for your company? Please email us info@incentius.com

And if you got this far, we think you’d like our future blog content, too. Please subscribe on the right side.

Analysis of internal vs external recruitment decisions Sujeet Pillai July 18, 2014

 

Leaders often face the dilemma: is it better to promote internally or hire externally? During the early growth phase this question becomes even more important because of the inevitable conflict between maintaining internal culture and hiring someone from outside to fulfil certain skill requirements. The decision is ultimately driven by the ability to mentor newly promoted employees with the right training programs and sufficient leadership time. It also depends on the organizational situation and whether there is enough room for failure. Once the decision is taken, the immediate next question is whether it will help the company in the long run.

There are obvious pros and cons to both approaches. Internal movers have longer experience within the firm, are more likely to be ambassadors of the firm culture, and have already acquired important firm-specific skills that new hires will lack. New hires, on the other hand, bring in a desired skillset from prior experience, along with fresh perspective and insights from other companies and industries. In general, internal promotion sends a better signal to employees about growth opportunities in the company, but it requires a strong training process to help promoted employees acquire the specific skills required to be successful in the new role.

According to Wharton management professor Matthew Bidwell, in his research paper titled Paying More to Get Less: The Effects of External Hiring versus Internal Mobility (pdf), external hires get significantly lower performance evaluations for their first two years on the job than do internal workers who are promoted into similar jobs. They also have higher exit rates, and they are paid "substantially more": about 18% to 20% more. On the plus side, if external hires stay beyond two years, they get promoted faster than those who were promoted internally. This behaviour can to some extent be explained by human psychology. Internal hires generally have better knowledge of existing processes and better rapport with leaders. External hires, on the other hand, take some time to learn new processes, prove themselves and eventually build rapport with leaders. Exit rates are also higher for external hires because there is generally less acceptance of failure where rapport is lacking.

Overall, external hiring has grown much more frequent since the early 1980s, especially for experienced high-level positions and especially in larger organizations. It used to be that smaller organizations preferred external hires due to a lack of internal talent, while big ones focused more on internal mobility. But the pendulum has now shifted towards external hiring and away from internal mobility for large organizations as well.

What is the specific situation in your company? Let's look specifically at sales positions. Analysis of salespeople's performance based on actual sales can be of immense help in making better decisions in the future. For example, let's look at the sales performance for one particular role, divided into internally promoted reps vs. external hires. Let's assume that a sales rep's incentive is based on monthly revenue quota achievement. Performance can be differentiated and analysed using various metrics and visualizations.

Comparison of performance over months in new role

Fig 1: Average quota achievement across months in new role

The above graph helps us understand the initial performance level of internal vs. external hires in the new role. From the graph, we can observe that initial quota achievement % for internal hires is better than for external hires, while the external hires start catching up around nine months into the new role. When doing such an analysis, it is better to consider at least 2 years of data so that you have a decent sample size to gauge the trend.
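The data preparation behind a chart like Fig 1 can be sketched as a simple group-and-average step. The records below are made up for illustration.

```python
# Sketch: average quota achievement % by month in the new role, for one group
# (internally promoted vs. externally hired).
def achievement_curve(records, group):
    """records: (group, month_in_role, achievement_pct) tuples.
    Returns {month_in_role: average achievement %} for the chosen group."""
    by_month = {}
    for g, month, pct in records:
        if g == group:
            by_month.setdefault(month, []).append(pct)
    return {m: sum(v) / len(v) for m, v in sorted(by_month.items())}

records = [
    ("internal", 1, 80), ("internal", 1, 90), ("external", 1, 60),
    ("internal", 9, 95), ("external", 9, 95),
]
internal = achievement_curve(records, "internal")   # {1: 85.0, 9: 95.0}
```

Plotting one curve per group over months in role reproduces the comparison shown in the figure.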

Retention Analysis

Fig 2: Retention rate in the new role across months

As is visible from the above graph, the retention rate of external hires is on average higher in the first year in role than that of internal hires. This stands to reason, as newly hired employees are reluctant to quit within the first year of a new job. From around 18 months, however, external hires start leaving faster due to various reasons such as work culture differences, inability to meet quotas, and general differences with leaders.

Percentage of promotions from the group over time

Fig 3: % of promotions from internal hire and external hire groups over time

The above graph analyzes promotion behaviour across the two groups. From the visual, observe that internal hires continue to fare better until about 2 years, after which the external hire group tends to do better.

By comparing retention rate and promotion analysis together, we can understand if the money spent on hiring
externally is justified by the number of leaders produced from external hires.

This blog post illustrates the approach using sales performance analyses rather than performance evaluations, which may carry some human bias (mostly in favour of internal hires). Various other metrics, such as historical performance data, can be used over time to understand the impact of hiring decisions.