

How to calculate Impact for RICE Scoring

With the wrong data, prioritization frameworks can cause more harm than good. How do we define and calculate Impact accurately for RICE Scoring?

Written by Daniel Kyne, 04 Oct 2021

In January 2018, Intercom PM Seán McBride published a blog post explaining a prioritization method that would quickly become one of the top three most popular frameworks in product management.

RICE Scoring is a simple formula that uses four criteria — Reach, Impact, Confidence and Effort — to measure the "total impact per time worked" for each option a product manager could pursue.

Formula graphic from "Management: RICE Scoring Model for Prioritization" by Lazaro Ibanez on Medium
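In code form, the standard calculation looks like this (a trivial sketch; the feature numbers below are invented for illustration):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE score = (Reach x Impact x Confidence) / Effort."""
    return (reach * impact * confidence) / effort

# Hypothetical option: 500 users/quarter reached, impact 2,
# 80% confidence, 4 person-months of effort.
print(rice_score(reach=500, impact=2, confidence=0.8, effort=4))  # 200.0
```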

I'm not going to waste your time explaining what each of the four letters means or how to calculate a RICE Score (that's been covered by plenty of people already). Instead, this is a step-by-step guide to robustly calculating the Impact value you assign to each roadmap option on your RICE spreadsheet.

Why is this important?

80% of features built are rarely or never used. In public tech companies alone, that poorly prioritized work adds up to $29.5 billion in wasted resources every year.

For RICE Scoring, product managers are expected to know (down to two decimal places) what level of impact a feature will have on the company's objectives.

No product manager actually knows this — you're just expected to guess.


Intercom's original guide to RICE Scoring says this pretty clearly: "Choosing an impact number may seem unscientific. But remember the alternative: a tangled mess of gut feeling."

But for modern product teams, guessing just isn't good enough. You need accurate data to make the right decision.

Prioritization frameworks alone aren't going to fix this $29.5 billion problem — a RICE Scoring spreadsheet is only as reliable as the data that you put into it.

Here's how you can accurately measure impact for RICE Scoring in just 8 steps:

Step 1: Define Your Objective


In order to calculate impact, we must first define the objective we're working towards.

The objective for product-led companies (which tend to focus on addressing end-user needs for bottom-up adoption) is often simply to reduce friction for the end-users of their product.

Other objectives could include improving the activation rate of your self-service onboarding, converting more free trial users to paying customers, or boosting your monthly retention rate.

For the purpose of this guide, let's focus on 'removing friction from the end-user experience' as our impact objective.


Step 2: Identify Your Customer Segment



If your customers can be divided into subcategories, you've almost certainly got multiple customer segments using your product to solve different problems and facing very different user experience challenges.

To make progress towards your objective, you must identify which segment will move the needle most. For example, given our objective to 'remove friction from the end-user experience,' do we want to improve the activation rate of new users or improve NPS for our highest-paying customers?

Segmentation examples:

  • Company Size: Small (1-49 employees), Medium (50-499) and Large (500+).

  • Region: NAM, LATAM, EMEA, APAC.

  • Job Function: Product Management, Sales, Marketing, Customer Success, Design...

  • Seniority: Entry, Middle, Senior, Exec...

  • Revenue: Freemium, Mid-Tier, Enterprise...

  • Industry: B2B SaaS, Medtech, Consumer, Travel, IoT, Web3...

At this stage, we don't necessarily need to exclude customers that fall outside our target segment. We simply need to define the characteristics of these segments so that we can later split each one into its own group and analyze it independently of the rest of the customer base.

For this example, we're going to focus on revenue as our segmentation category, meaning we plan to split our results into freemium, mid-tier, and enterprise customers.
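Here's a minimal sketch of what that later split might look like in code (using pandas; the participant data and column names are invented for illustration):

```python
import pandas as pd

# Hypothetical survey export with a revenue-tier screener question.
participants = pd.DataFrame({
    "participant_id": [1, 2, 3, 4, 5, 6],
    "revenue_tier": ["freemium", "enterprise", "mid-tier",
                     "freemium", "mid-tier", "enterprise"],
})

# Split into one group per segment so each can be analyzed independently.
segments = {tier: group for tier, group in participants.groupby("revenue_tier")}
print(segments["enterprise"]["participant_id"].tolist())  # [2, 6]
```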

Step 3: Stack Ranking


We're going to use stack ranking to quantify the impact of each option on our list.

As a research methodology, stack ranking uses a combination of data science techniques (pairwise comparison voting, rating system algorithms, and weighted option distribution) to compare a list of options under a core question and rank them from highest to lowest.

That sounds far more complicated than it really is. All you need is a question, a list of options and a group of people to vote on them. Using a stack ranking tool like OpinionX will automate all the analysis and data science stuff for you.
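To make that concrete, here's a minimal sketch of how pairwise votes can be turned into a ranked list using a simple Elo-style rating (an illustration only, not OpinionX's actual algorithm; the votes are invented):

```python
from collections import defaultdict

def elo_rank(votes, k=32):
    """Rank options from pairwise votes, where each vote is (winner, loser)."""
    ratings = defaultdict(lambda: 1000.0)
    for winner, loser in votes:
        # Expected chance the winner would win, given current ratings.
        expected = 1 / (1 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
        ratings[winner] += k * (1 - expected)
        ratings[loser] -= k * (1 - expected)
    return sorted(ratings, key=ratings.get, reverse=True)

# Invented votes: "slow exports" beats the other issues most often.
votes = [("slow exports", "no dark mode"), ("slow exports", "login bugs"),
         ("login bugs", "no dark mode"), ("slow exports", "no dark mode")]
print(elo_rank(votes))  # ['slow exports', 'login bugs', 'no dark mode']
```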

Step 4: Question Time


This step is pretty easy — we turn our objective into a question for our customers.

Participants will be presented with pairs of statements and asked to pick one from each pair. Your question acts as the context for how they should judge each pair.

Our objective to 'remove friction from the end-user experience' can be translated into a question like 'Which issue is more frustrating when using our product?'

Example of a pair vote during an OpinionX stack ranking survey

Keep your question easy to understand and as closely related to your objective as possible.

Step 5: Import Your Options


Take all the options on your RICE Scoring list and import them as individual statements.

Depending on how you store these options on your roadmap or scoring spreadsheet, you might need to edit them before moving on to the next step (see the sketch after the tips below).

Here are some writing tips:

  • Turn each option into a short statement, like "Not being able to export my results".

  • Each statement should be written concisely with only one main point/topic.

  • Keep the perspective of each statement consistent. Writing feature ideas alongside problem statements will make it hard for participants to compare each one objectively (we prefer using problem statements so that we can save solutioning for after we've identified our priority problems).

  • Avoid including slang, jargon, or personal information in your statements.
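As a quick sketch, if your options live in a spreadsheet export, a pass like this can flag statements that probably need rewriting before import (the options.csv filename and statement column are hypothetical):

```python
import csv

# Load roadmap options from a hypothetical spreadsheet export.
with open("options.csv", newline="") as f:
    statements = [row["statement"].strip() for row in csv.DictReader(f)]

# Flag statements that are likely too long or bundle multiple points.
for s in statements:
    if len(s.split()) > 15 or " and " in s:
        print(f"Review before importing: {s!r}")
```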

Step 6: Engage Participants


Getting your research underway is really simple. Import your stack ranking question and list of options into your research tool.

We like to use our product OpinionX (which is free!) for stack ranking because it automates all the data science analysis and lets us engage large groups of users without any hassle.

On OpinionX, we also add in multiple-choice questions to identify each participant’s revenue tier and function within their company, so that we can use these data points to segment our results later on.

Finally, one last benefit of using OpinionX is that we can ask participants to submit extra statements if we're missing any important issues from our stack ranking list. All we have to do is include an open-response question in our survey setup and then we can add any participant-generated statements to the stack rank list in just one click.

Once we're ready to engage participants, we just copy the survey link and share it with our participants (usually via email, in-app message, or popup banner).

Step 7: Converting Results to Impact


No fancy calculations are needed because the output you get from a stack ranking survey is super simple — all the options are automatically ranked from first to last based on what was most important to participants.

Rather than overcomplicating things with a conversion formula, I like to keep things simple by taking batches of statements and assigning each batch an impact score. Let's imagine that we were considering 100 statements ranked from most important (1st) to least important (100th), and that we use a range from 0.5 up to 5 for our impact score:

  • Impact 5 = 1st to 10th (top 10%)

  • Impact 4 = 11th to 30th

  • Impact 3 = 31st to 50th

  • Impact 2 = 51st to 70th

  • Impact 1 = 71st to 90th

  • Impact 0.5 = 91st to 100th (bottom 10%)
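Those bands are easy to express as code (a sketch assuming the 100-statement example above):

```python
def impact_bucket(rank: int) -> float:
    """Map a stack rank (1 = most important of 100) to an impact band."""
    if rank <= 10: return 5
    if rank <= 30: return 4
    if rank <= 50: return 3
    if rank <= 70: return 2
    if rank <= 90: return 1
    return 0.5

print(impact_bucket(7), impact_bucket(42), impact_bucket(95))  # 5 3 0.5
```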

If you want to preserve the detail in the stack ranking order, here's a more advanced formula you could use to get a real number (which can be rounded to two decimal places where needed):
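One such formula (a linear mapping from rank to the 0.5-5 scale; an illustrative assumption on my part, since the original may have used a different formula) is Impact = 5 - 4.5 × (rank - 1) / (N - 1), where N is the total number of statements:

```python
def impact_score(rank: int, n: int) -> float:
    """Linearly map a stack rank (1 = most important of n) onto the 0.5-5 scale."""
    return round(5 - 4.5 * (rank - 1) / (n - 1), 2)

print(impact_score(1, 100), impact_score(50, 100), impact_score(100, 100))  # 5.0 2.77 0.5
```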

Step 8: Time to Experiment


Prioritization frameworks like RICE Scoring help us to understand the opportunity cost of each potential option and the trade-offs we need to make to pursue just a select few of them. Rather than filling your spreadsheet with your own internal assumptions, stack ranking gives you a structured way to view, in aggregate, the trade-offs each customer would make if they were given the choice.

Stack ranking can help us to understand people's biggest problems, the motivations influencing their behaviors, the values underpinning their decisions, and the features they're dying to get their hands on most. All you need to do is ask them.

Create your own stack ranking survey for free on OpinionX today and ditch those RICE Scoring assumptions for real, robust data instead.
