Product Management · 7 min read · April 23, 2026

RICE, ICE, and When Both Are Wrong: A PM's Real-World Guide to Feature Prioritization

Learn how to use RICE and ICE scoring correctly, when to skip both, and the exact prioritization stack senior PMs use to defend roadmap decisions.

PM Streak Editorial · Expert-reviewed PM content sourced from 300+ Lenny's Podcast episodes


Every PM knows they should "prioritize ruthlessly." What that actually looks like at 9am on a Monday with 47 items in the backlog, an engineering lead asking for sprint picks, and a VP breathing down your neck about the roadmap — that is where frameworks earn their keep or fall apart.

RICE and ICE are the two most commonly used prioritization frameworks in product management. But most PMs use them wrong: either applying them mechanically without understanding the assumptions baked in, or abandoning data entirely and going with gut. Here is an opinionated guide to actually getting prioritization right.

RICE: What It Is and Where It Shines

RICE was developed by the product team at Intercom to add rigor to internal roadmap decisions. The formula:

(Reach × Impact × Confidence) ÷ Effort = RICE Score

Each factor is scored as follows:

  • Reach: How many users will this affect in a given time period? Measure in actual users or events per quarter, not vague percentages. "1,200 users per quarter" is useful. "Most users" is not.
  • Impact: Scored on a fixed multiplier scale — 3 (massive), 2 (high), 1 (medium), 0.5 (low), 0.25 (minimal). This is a multiplier, so inflating Impact inflates everything downstream.
  • Confidence: How strong is your evidence? High = 100%, Medium = 80%, Low = 50%. If you are running on instinct alone, that is a 50%.
  • Effort: Total person-months of work across PM, design, and engineering. A one-week project = 0.25 person-months.

A Real RICE Calculation With Numbers

Say you are comparing two features for Q2 planning:

Feature A: Onboarding tooltip improvements

  • Reach: 800 new users per quarter
  • Impact: 2 (high — reduces churn in the first week)
  • Confidence: 80% (user research supports this)
  • Effort: 1 person-month
  • RICE Score: (800 × 2 × 0.8) ÷ 1 = 1,280

Feature B: Advanced reporting dashboard

  • Reach: 120 power users per quarter
  • Impact: 3 (massive to this segment)
  • Confidence: 100% (direct customer requests with signed contracts)
  • Effort: 4 person-months
  • RICE Score: (120 × 3 × 1.0) ÷ 4 = 90

RICE tells you Feature A is 14x more impactful per unit of effort, even though Feature B feels more impressive in a board meeting. This is RICE doing its job: surfacing what gut instinct obscures.
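The comparison above is simple enough to sketch in a few lines of code. This is a minimal illustration of the RICE formula applied to the two features from the example; the function name and structure are ours, not part of any standard library:

```python
def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach × Impact × Confidence) ÷ Effort.

    reach: users or events per quarter
    impact: multiplier scale (0.25, 0.5, 1, 2, or 3)
    confidence: 0.5 (low), 0.8 (medium), or 1.0 (high)
    effort: total person-months across PM, design, and engineering
    """
    return (reach * impact * confidence) / effort

# Feature A: onboarding tooltip improvements
feature_a = rice_score(reach=800, impact=2, confidence=0.8, effort=1)

# Feature B: advanced reporting dashboard
feature_b = rice_score(reach=120, impact=3, confidence=1.0, effort=4)

print(feature_a)  # 1280.0
print(feature_b)  # 90.0
```

Encoding the formula this way also makes it trivial to re-run the comparison when an estimate changes — bump Feature B's effort from 4 to 2 person-months and its score doubles to 180, still well short of Feature A.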

ICE: When Speed Beats Precision

ICE was coined by growth hacker Sean Ellis for scoring growth experiments. The formula:

Impact × Confidence × Ease = ICE Score

All three factors are scored 1 to 10. ICE is faster to apply than RICE (no quantitative data required) and was specifically designed for weekly growth experiment prioritization, not quarterly roadmap planning.

Use ICE when:

  • You are running a growth sprint and need to rank 20 experiments by Friday
  • You do not have user data yet (pre-launch or early stage product)
  • You need to align a room quickly without a 2-hour scoring session

Do not use ICE for:

  • Q3 roadmap planning — the absence of Reach means you cannot compare features affecting different user populations
  • Anything requiring engineering or leadership buy-in — ICE scores are too easy to game (a PM can justify almost anything with a 9/10 Ease score)
  • Decisions where effort variance is large — ICE does not distinguish between a 1-week and a 3-month build
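For a weekly growth sprint, ICE scoring can be as lightweight as a dict and a sort. A quick sketch — the experiment names and scores here are purely illustrative:

```python
def ice_score(impact, confidence, ease):
    """ICE = Impact × Confidence × Ease, each scored 1-10."""
    return impact * confidence * ease

# Hypothetical growth experiments for this week's sprint
experiments = {
    "resend welcome email after 48h":   ice_score(6, 7, 9),
    "add social proof to pricing page": ice_score(8, 5, 7),
    "shorten signup form to 2 fields":  ice_score(7, 8, 8),
}

# Rank highest score first
for name, score in sorted(experiments.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:>4}  {name}")
```

Note what is missing compared to the RICE sketch: no reach and no effort denominator. That is exactly why this ranking works for a pile of similarly sized experiments and breaks down for roadmap items of wildly different scope.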

The Three Situations Where Both RICE and ICE Fail

1. Strategic vs. Tactical Features

RICE and ICE do not account for strategic value. A feature that scores low on both might still be essential because it unlocks a new market, satisfies a contractual commitment, or signals technical capability to acqui-hire targets. Reserve 20 to 30% of your roadmap capacity for "strategic bets" that bypass scoring frameworks entirely — but require explicit executive sign-off and a written rationale.

2. When You Are Solving for the Wrong Metric

If your company's North Star Metric is net revenue retention, a feature that drives activation for free users will score high in RICE but miss the actual business goal. Always anchor your scoring to the metric that matters this quarter, not whatever proxy feels good. Revisit and realign your RICE parameters at the start of every planning cycle.

3. Binary Dependencies

Sometimes Feature C is worthless without Feature D. RICE scores them independently, so C might score low and get cut, leaving D orphaned. Before running any RICE scoring session, map out hard dependencies and bundle them into scored "epics" rather than individual features. Score the epic as a unit.
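One simple convention for scoring a bundled epic — an assumption on our part, not a canonical rule — is to score the combined deliverable once (its reach, impact, and confidence as a unit) and sum the efforts of every feature in the bundle:

```python
def rice_score(reach, impact, confidence, effort):
    return (reach * impact * confidence) / effort

# Features C and D only deliver value together, so score them as one epic.
# Convention (an assumption): score the combined outcome once, sum the efforts.
feature_c_effort = 2    # person-months (prerequisite plumbing, no user-visible value)
feature_d_effort = 1.5  # person-months (the user-facing feature it unlocks)

epic = rice_score(
    reach=500,        # users reached by the *combined* deliverable
    impact=2,
    confidence=0.8,
    effort=feature_c_effort + feature_d_effort,
)
print(round(epic, 1))  # 228.6
```

Scored independently, Feature C would have reach 0 and get cut every time; scored as an epic, the pair competes on its real merits.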

The Prioritization Stack That Actually Works

In practice, senior PMs at high-growth companies use a layered approach that combines frameworks with judgment:

  1. Filter for strategic fit — Does this align with this quarter's company OKRs? Cut anything that does not pass this filter before scoring.
  2. Apply RICE for the survivors — Score remaining items against each other using consistent parameters agreed on by your team.
  3. Override with documented judgment — Explicitly flag 1 to 2 items where strategy or customer relationships override the RICE score, and write down why. This protects you when the VP asks "why is Feature X not in the sprint?"
  4. ICE for growth experiments — Keep a separate backlog for growth and experiment ideas, scored with ICE on a weekly cadence outside of roadmap planning.

This is not a single framework — it is a stack. The goal is not mathematical purity; it is a defensible, repeatable process that your team and leadership can trust.
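The first three layers of the stack can be sketched as a small pipeline. Everything here — field names, backlog items, numbers, and the override rationale — is illustrative, not a prescribed schema:

```python
# Hypothetical backlog; all fields and numbers are illustrative.
backlog = [
    {"name": "Onboarding tooltips", "okr_fit": True,  "reach": 800, "impact": 2,
     "confidence": 0.8, "effort": 1, "override": None},
    {"name": "Reporting dashboard", "okr_fit": True,  "reach": 120, "impact": 3,
     "confidence": 1.0, "effort": 4, "override": "Contractual commitment for Q2 renewal"},
    {"name": "Dark mode",           "okr_fit": False, "reach": 900, "impact": 1,
     "confidence": 0.8, "effort": 2, "override": None},
]

def rice(item):
    return (item["reach"] * item["impact"] * item["confidence"]) / item["effort"]

# Layer 1: filter for strategic fit before scoring anything.
survivors = [i for i in backlog if i["okr_fit"]]

# Layers 2-3: score survivors with RICE, but documented overrides rank first.
ranked = sorted(survivors, key=lambda i: (i["override"] is None, -rice(i)))

for item in ranked:
    flag = f"  [OVERRIDE: {item['override']}]" if item["override"] else ""
    print(f"{rice(item):>7.1f}  {item['name']}{flag}")
```

Dark mode never gets scored because it fails the OKR filter, and the dashboard outranks its own RICE score of 90 only because the override reason is written down — which is precisely the paper trail that answers the VP's "why is Feature X not in the sprint?"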

Avoiding the Most Common RICE Mistakes

Inflating Impact scores: Impact 3 (massive) should mean "this would be among the most significant improvements we ship this year." If you are giving Impact 3 to every feature you are excited about, the framework is useless. Reserve Impact 3 for at most one or two items per planning cycle.

Estimating Effort in hours instead of person-months: Hours make small tasks look artificially cheap and obscure the total team cost. Person-months force you to include PM planning time, design iterations, and QA — not just engineering sprint days.

Running RICE in isolation: RICE is a conversation tool as much as a calculation tool. The most valuable insight usually comes from explaining why you scored Reach at 500 instead of 2,000 — not from the final number. Run your RICE session live with engineering and design, not solo at your desk.

Never revisiting scores: The market changes. A feature scored in January may be dramatically more or less important by April. Treat your RICE scores as living documents, not decisions carved in stone.

Prepare for Roadmap Questions in PM Interviews

Roadmap prioritization is one of the most common execution questions in PM interviews. Interviewers want to see that you can apply frameworks while also knowing when to override them with judgment. Check out the interview prep resources on PM Streak to practice structuring prioritization answers under pressure — including the classic "how would you prioritize this backlog" scenario.

If you want a new PM challenge every day — including roadmap prioritization and feature trade-off scenarios — join thousands of PMs building their skills at PM Streak's daily challenge. Consistent practice is the fastest path from knowing frameworks to owning them in the room.

prioritization · RICE framework · ICE framework · product roadmap · feature prioritization

