RICE is a product management prioritization framework that originated at Intercom. Sean McBride, one of the product managers there at the time, co-developed the RICE framework to improve the company's decision-making process and solve several problems its product managers were facing. You can read his original article on the RICE framework here.
McBride and his colleagues found that, as a company, the way they prioritized favored 'pet projects' rather than the ideas that impacted the most customers. They also found there wasn't enough scrutiny of how proposed ideas affected their goals. Lastly, as Sean notes in his original article on RICE prioritization, many of the prioritization frameworks out there seldom factored in things like confidence.
In the end, McBride and his colleagues decided to develop a prioritization framework of their own, and the RICE prioritization framework was formed.
The RICE framework is a form of prioritization scorecard that uses four criteria: Reach, Impact, Confidence and Effort, which, by no coincidence, make up the acronym RICE.
Since the RICE framework is a prioritization scorecard, each attribute is given a number, and a formula combines those numbers into a final score known as the RICE score.
Since its humble beginnings at Intercom, the RICE framework has become one of the most popular prioritization frameworks in product management. A 2020 survey by ProductPlan found RICE to be the 5th most used prioritization framework.
The first criterion in the RICE framework is Reach. Designed to counter favoritism toward 'pet projects', reach gives a score to how many people you believe your idea will impact or provide value to.
For example, this means estimating how many users will use your new feature. Doing this ensures that we give greater focus to the ideas that impact more users. An idea that solves a problem faced by 80% of your users will have a much larger reach score than one that solves a problem for only 2%.
To give a reach score, first estimate how many users the idea will reach. Next, apply a time horizon to that estimate. This is important because some ideas will impact more users over time, whereas others may only provide a one-off benefit.
There is no rule for what time horizon to apply for reach; it will depend on your context and product. But it's important to apply it consistently across all your ideas, i.e. if you decide that reach will be calculated based on how many users an idea will reach over a 3-month period, then you need to keep that 3-month horizon for all reach scores, regardless of the idea.
For example, say you have 7k MAU (monthly active users). If you believe this idea will impact 60% of them over the month, then your Reach number will be 4200 users - and if you’re calculating this over a quarter it would be 4200 x 3 months = 12,600.
| Number of MAU | Percentage reached in month 1 | Percentage reached in month 2 | Percentage reached in month 3 | Final reach score |
|---|---|---|---|---|
| 7,000 | 10% = 700 | 25% = 1,750 | 50% = 3,500 | 5,950 |
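As a minimal sketch, the reach calculation from the worked example above can be expressed in a few lines of code (the helper name `reach_score` is mine, not part of the framework):

```python
def reach_score(mau: int, monthly_percentages: list[float]) -> float:
    """Sum the users reached in each month of the chosen time horizon."""
    return sum(mau * pct for pct in monthly_percentages)

# 7,000 MAU with 60% reached each month over a quarter:
# 4,200 + 4,200 + 4,200 = 12,600
quarterly_reach = reach_score(7000, [0.60, 0.60, 0.60])
print(quarterly_reach)
```

The same helper also handles the table's case, where the percentage reached grows month by month rather than staying flat.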
- How many users will this reach over x days/weeks/months?
- Do I expect this to increase or decrease over x days/weeks/months? If so, by how much?
The next criterion in the RICE framework is Impact. Impact is about determining how much an idea will contribute toward your chosen goal.
Impact ensures that all ideas are assessed against the current product goal, prioritized outcome or OKR. This means that to use the RICE framework effectively, you first need a defined outcome you want to achieve.
For example, if a team's current OKR is to "increase conversion by 10%", the question for impact becomes: "how much do I believe this idea will impact our conversion rate?"
Intercom's guide on RICE prioritization provides a scoring scale for impact, from 'massive impact' to 'minimal impact', with each level having a corresponding score:

- Massive impact = 3
- High impact = 2
- Medium impact = 1
- Low impact = 0.5
- Minimal impact = 0.25
However, don't feel restricted to this scale. You can use your own, whether that's a 1-5 scale, a score out of 10, or even a simple 1-3 scoring.
The third criterion in the RICE framework is Confidence. Confidence is all about how confident you are in the Reach, Impact and Effort scores you have given. This allows ideas we have greater confidence in, for example ideas supported by data or by appropriate discovery and user testing, to be favored over ideas that are closer to a 'wild guess'.
When calculating confidence ask yourself “how confident am I about the scores I have given for Reach, Impact and Effort?”
Confidence within RICE prioritization is defined as a percentage. As a guide, you can divide confidence up into:
- High confidence = 100%
- Medium confidence = 80%
- Low confidence = 50%
- Below 50% is a wild guess.
Since confidence is a percentage, it penalizes ideas where we have little confidence. For example, an idea may have very high impact and reach scores, but if confidence is 50%, the final score will be half of what it would be at 100% confidence.
This makes confidence a great forcing function for performing spikes and discovery: the way you increase confidence is through learning.
Low confidence scores aren't necessarily bad, nor do they exclude an idea by default. Rather, confidence ensures that we don't immediately take on work when we have low confidence in its success. Instead, we invest time gaining confidence by performing technical spikes and doing product discovery. This can happen while we work on higher-confidence ideas where we have more clarity.
By doing so, we ensure that we are making data-informed decisions and that the scores in our RICE prioritization aren't completely made up; when they are, the confidence score scales them down appropriately.
The last component of the RICE prioritization framework is Effort. Effort is exactly what the name suggests: the effort required to make the idea real.
By including effort as a criterion, the final RICE score compares ideas based on their potential ROI (return on investment).
Reach and impact both capture the value of an idea, so dividing those scores by effort gives us an ROI figure.
Effort can be calculated in 'person-hours', or in a number of days, weeks, sprints, etc.
For example, perhaps an idea will take a team of 5 two weeks to complete. In person-weeks, this would be 5 x 2 = 10; measured simply in weeks, it would be 2.
Again, whatever unit you choose for effort, ensure it is applied consistently. Scoring one idea in person-hours and another in sprints won't work.
RICE prioritization uses a formula that multiplies reach, impact and confidence together, then divides the result by effort. The output is known as the RICE score.
RICE score = (Reach x Impact x Confidence) / Effort
For each idea you want to prioritize, give individual scores for reach, impact, confidence and effort.
From there, multiply reach x impact x confidence, then divide by effort. This gives you the idea's RICE score.
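As a minimal sketch, that calculation can be written as a small function (the function name is mine; the example numbers match the payments-processing row below):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE score = (Reach x Impact x Confidence) / Effort."""
    if effort <= 0:
        raise ValueError("effort must be positive")
    return (reach * impact * confidence) / effort

# Reach 7000, impact 2, confidence 40%, effort 4:
# 7000 x 2 x 0.40 / 4 = 1400
print(rice_score(7000, 2, 0.40, 4))
```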
| Idea | Reach | Impact | Confidence | Effort | RICE score |
|---|---|---|---|---|---|
| Improve payments processing | 7000 | 2 | 40% | 4 | 1400 |
When using RICE prioritization, it's typically best to set up a spreadsheet or use a tool that handles the calculations, so that you're not constantly performing them manually.
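A spreadsheet works well; as a sketch, the same ranking can also be done in a few lines of code (the idea names and scores here are hypothetical, except the payments-processing row from the table above):

```python
# Hypothetical backlog: (name, reach, impact, confidence, effort)
ideas = [
    ("Improve payments processing", 7000, 2, 0.40, 4),
    ("Redesign onboarding", 4000, 3, 0.80, 6),
    ("Dark mode", 2000, 1, 1.00, 2),
]

def rice(reach, impact, confidence, effort):
    return (reach * impact * confidence) / effort

# Sort the backlog with the highest RICE score first.
ranked = sorted(ideas, key=lambda idea: rice(*idea[1:]), reverse=True)
for name, *scores in ranked:
    print(f"{name}: {rice(*scores):.0f}")
```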
With such wide adoption, RICE prioritization is a go-to favorite for product managers globally when it comes to choosing a prioritization framework. There are many reasons why, and they all link back to the challenges McBride and his colleagues were facing at Intercom: most product managers find themselves facing similar challenges and seeking a framework that will help address them.
Unlike some prioritization frameworks, RICE prioritization can be applied consistently across even the most disparate ideas.
By using numbers and actual data, as with reach, the RICE framework removes a lot of the subjectivity that comes with prioritization. Although you cannot remove subjectivity completely, putting values down on paper alters the conversation. If someone scores impact as a 3, we can ask them why, and if they struggle to support their choice, it is easier to adjust, either by downgrading the rating or by lowering the confidence score.
Lastly, RICE prioritization encourages the product management best practices of being data-informed and focusing on outcomes through impact.
It's important to remember that the final RICE score isn't supposed to do your prioritization perfectly for you. Use the RICE framework as an input to your prioritization, which means spending time interrogating the data.
For example, you may have two ideas with similar RICE scores. On further inspection, you find that the one that scored slightly higher has a confidence rating of 60%, whereas the one that scored a mere 2 points less has a confidence rating of 80%. In this scenario, which idea would be better to prioritize first? In most cases, it makes sense to do the one with the higher confidence rating first, even though it scored lower.
This is because the scores aren't far apart, and given the choice, we should usually work first on the idea with higher confidence, as that correlates with a greater chance of getting the results you expected.