The RICE prioritization framework is one of the most battle-tested tools in a product manager's arsenal. If you have ever watched a roadmap meeting devolve into a contest over who can speak the loudest, RICE is the antidote. Developed at Intercom, RICE gives every feature a numerical score so that gut feelings lose to data.
RICE stands for Reach, Impact, Confidence, and Effort. The formula is simple: RICE Score = (Reach x Impact x Confidence) / Effort. Higher scores win. Let's break down exactly what each factor means and how to calculate it without fooling yourself.
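The formula is simple enough to express as a one-line function. Here is a minimal sketch (the function name and parameter validation are my own; the formula and scales come from the article):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE Score = (Reach x Impact x Confidence) / Effort.

    reach:      users affected in the time window (e.g. one quarter)
    impact:     0.25 to 3 on Intercom's five-point scale
    confidence: 0.0 to 1.0 (e.g. 0.8 for 80%)
    effort:     person-months; must be positive
    """
    if effort <= 0:
        raise ValueError("effort must be a positive number of person-months")
    return (reach * impact * confidence) / effort

# 1,200 users, high impact, 90% confidence, half a person-month:
print(rice_score(1200, 2, 0.9, 0.5))  # 4320.0
```

Note that because Effort is the divisor, halving the effort doubles the score; cheap wins rise fast.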
Reach: How Many Users Actually See This?
Reach is the number of users or customers who will be affected by this feature in a defined time window — typically one quarter. The key word is actually: pull the number from your analytics, not from optimistic projections.
For a B2B SaaS company with 5,000 monthly active users:
- A fix to the onboarding flow that 80% of new users hit: Reach = 4,000
- A power-user bulk export button used by 3% of users: Reach = 150
Many PMs make the mistake of using total user count instead of the affected segment. Be surgical. If a feature only affects paid users and you have 500 paid users, Reach = 500, not your full 10,000 signups.
When You Do Not Have Enough Data
For new features with no historical data, use comparable features as a proxy. Launched a similar feature last year? Use its adoption rate as your baseline. No comparable? Flag it with low Confidence (more on that below) and use conservative estimates.
Impact: How Much Does It Actually Move the Needle?
Impact measures how much this feature helps each individual user it touches. Intercom uses a five-point scale:
- 3 = massive impact (transforms the user experience)
- 2 = high impact (significant improvement)
- 1 = medium impact (noticeable improvement)
- 0.5 = low impact (minor convenience)
- 0.25 = minimal (barely noticeable)
Here is the uncomfortable truth: most PMs rate too many features at 2 or 3. If everything is high impact, nothing is. Force yourself to explain why a feature is a 3 before giving it that score. What user problem does it solve? What does the user do today without it? How painful is that workaround?
The Test for a Score of 3
A feature deserves a 3 if users would pay extra for it, actively complain about its absence, or if it unblocks a core workflow they cannot complete without it. Features that are nice-to-have are 1s. Period.
Confidence: The Honesty Score
Confidence is where the RICE framework separates rigorous PMs from optimistic ones. It is expressed as a percentage reflecting how certain you are about your Reach and Impact estimates:
- 100% = multiple data sources confirm the estimates (user research, A/B tests, sales data)
- 80% = strong qualitative signals from user interviews or support tickets
- 50% = some signals but mostly inference
- 20% = gut feeling, no data
Low Confidence does not mean do not build it. It means the feature needs more discovery before it gets a high-priority slot. A 20% Confidence rating is a flag that you need a user interview or a prototype test, not a sprint.
Effort: The Full Cost, Not Just Engineering
Effort is measured in person-months — the total work across all roles required to ship. Include design, engineering, QA, and PM time. A feature that takes one engineer a week plus a designer a day is roughly 0.3 person-months.
Common mistakes:
- Using only engineer estimates: Design, PM scoping, and QA add 30-50% on top.
- Ignoring maintenance: Features that require ongoing support (dashboards, ML models) have hidden ongoing effort.
- Optimism bias: Add a 20% buffer to every estimate. You will thank yourself later.
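The rules of thumb above can be folded into a single loading step. A sketch, using the article's suggested numbers (the 40% overhead default sits in the 30-50% range; the function name is my own):

```python
def loaded_effort(engineering_months: float,
                  overhead: float = 0.40,
                  buffer: float = 0.20) -> float:
    """Turn a raw engineering estimate into a loaded Effort value.

    overhead: design, PM scoping, and QA on top of engineering (30-50%)
    buffer:   optimism buffer applied to the whole estimate (20%)
    """
    return engineering_months * (1 + overhead) * (1 + buffer)

# A "one person-month" engineering estimate becomes ~1.68 person-months:
print(round(loaded_effort(1.0), 2))
```

Plugging the loaded figure, not the raw one, into the RICE denominator keeps cheap-looking features from being systematically over-ranked.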
Scoring in Practice: A Real Example
Imagine you are the PM for a project management tool. You have three candidates:
Feature A — CSV Export: Reach = 1,200, Impact = 2, Confidence = 90%, Effort = 0.5. RICE = (1,200 x 2 x 0.9) / 0.5 = 4,320
Feature B — AI Meeting Summary: Reach = 300, Impact = 3, Confidence = 40%, Effort = 4. RICE = (300 x 3 x 0.4) / 4 = 90
Feature C — Keyboard Shortcuts: Reach = 500, Impact = 1, Confidence = 80%, Effort = 0.5. RICE = (500 x 1 x 0.8) / 0.5 = 800
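The three candidates can be scored and ranked in a few lines. A sketch using the figures from the example (the tuple layout and helper are illustrative):

```python
# (name, reach, impact, confidence, effort)
features = [
    ("CSV Export",         1200, 2,    0.90, 0.5),
    ("AI Meeting Summary",  300, 3,    0.40, 4.0),
    ("Keyboard Shortcuts",  500, 1,    0.80, 0.5),
]

def rice(reach, impact, confidence, effort):
    return (reach * impact * confidence) / effort

# Sort by RICE score, highest first:
ranked = sorted(features, key=lambda f: rice(*f[1:]), reverse=True)
for name, *params in ranked:
    print(f"{name}: {rice(*params):,.0f}")
# CSV Export: 4,320
# Keyboard Shortcuts: 800
# AI Meeting Summary: 90
```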
CSV Export wins by a landslide — not because it is exciting, but because many users need it, you are confident it will help, and it is fast to ship. The AI Summary sounds compelling in a pitch deck, but the low Confidence score reflects that you do not actually know whether users will adopt it.
When RICE Gets It Wrong
RICE is not a crystal ball. It consistently undervalues:
- Foundation work: Refactoring, infrastructure, and developer tools score poorly but enable every future feature.
- Strategic bets: A feature that only affects 50 enterprise accounts but could unlock a $1M contract looks terrible in RICE.
- Network effects: Features that grow more valuable as more users adopt them are hard to capture in a static score.
The fix: run RICE for your tactical backlog, but maintain a separate strategic bucket for investments that do not fit the formula. Make that trade-off explicit rather than forcing everything through the same scoring system.
How to Roll Out RICE Without Losing Your Team
Introducing any new framework meets resistance. Here is what works:
- Score your last five shipped features retroactively. Did RICE agree with what you built? Where did it diverge? Use this as a team calibration exercise, not a postmortem.
- Run one sprint cycle with RICE as input, not decision. Present the scores alongside the team's intuitions. Discuss the gaps.
- Agree on your scales together. Your definition of Impact = 2 should match your designer's definition. Misaligned scales produce meaningless scores.
Ready to practice prioritization and other core PM frameworks every day? PM Streak's daily challenges give you one real PM scenario per day with structured feedback. Three minutes a day is all it takes to internalize frameworks like RICE until they become instinct. You can also explore specific PM topics on demand.