Defining the scope of a product MVP means identifying the single riskiest assumption about your product, then building the minimum set of functionality that tests only that assumption, not the minimum set that satisfies all stakeholders.
Most MVPs are too big. They include features that stakeholders requested, edge cases the team worried about, and "while we're at it" additions that accumulated during planning. By the time they ship, they're expensive to learn from, expensive to change, and expensive to kill.
A well-scoped MVP answers one question: is our riskiest assumption about user behavior true? Everything else is scope creep.
Start With the Riskiest Assumption
The Assumption Hierarchy
Before scoping, explicitly list every assumption your product depends on:
- Desirability: Do users want this? Will they change their behavior to get it?
- Viability: Can we build a business around it? Do the unit economics actually work?
- Feasibility: Can we build it with our current or near-future capabilities?
- Usability: Can users figure out how to use it without hand-holding?
Most teams scope MVPs to test feasibility (can we build it?) when the actual risk is desirability (will anyone use it?). If desirability is your biggest unknown, your MVP should test desirability — not your ability to ship.
Identify the One Riskiest Assumption
Write down: "Our product will fail if we're wrong about [blank]."
For most early-stage products, that blank is something like:
- "Users will switch from their current tool to ours"
- "Users will pay [price] for this"
- "The manual step we're automating is actually painful enough to drive behavior change"
Your MVP scope is defined by the minimum functionality needed to get a real answer to that question.
According to Lenny Rachitsky's writing on MVP design, most teams shortchange themselves by building for users they imagine rather than users they've observed. The most important input to MVP scope is an hour of unmoderated user research — watching someone try to solve the problem you're solving without your product.
The Feature Triage Method
Categorize Features Into Three Buckets
For every feature under consideration, assign it to one of three buckets:
Bucket 1 — Validates the core assumption. This feature is necessary to test whether the riskiest assumption is true or false. If you remove it, you can't answer the question.
Bucket 2 — Supports usability but doesn't test the assumption. This feature prevents the experiment from being invalidated by confusion or UX friction, but doesn't test the core hypothesis.
Bucket 3 — Everything else. Features that are nice to have, that users mentioned, that stakeholders requested, or that the team wants to build. These do not belong in the MVP.
Your MVP contains only Bucket 1 features plus the minimum Bucket 2 features needed to prevent confounds.
The "Remove It" Test
For every feature in your MVP scope, ask: "If we removed this, would we still be able to test our riskiest assumption?"
If yes: remove it. If no: it stays.
This sounds obvious. In practice, most teams cannot bring themselves to remove features because of stakeholder pressure, team attachment, or fear of looking incomplete. The "remove it" test makes the trade-off explicit.
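The triage buckets and the "remove it" test can be sketched as a simple filter. This is an illustration only: the feature names and the two flags are hypothetical, and the point is the decision rule, not a prescribed tool.

```python
# Sketch of three-bucket triage plus the "remove it" test.
# All feature names and flags below are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    tests_assumption: bool    # Bucket 1: removing it breaks the experiment
    prevents_confound: bool   # Bucket 2: minimum usability to keep results valid

def mvp_scope(features):
    """Keep a feature only if removing it would invalidate the test."""
    return [f for f in features if f.tests_assumption or f.prevents_confound]

proposed = [
    Feature("one-click import from current tool", True, False),   # Bucket 1
    Feature("basic onboarding walkthrough", False, True),         # Bucket 2
    Feature("dark mode", False, False),                           # Bucket 3
    Feature("admin analytics dashboard", False, False),           # Bucket 3
]

in_scope = mvp_scope(proposed)
not_mvp = [f.name for f in proposed if f not in in_scope]

print([f.name for f in in_scope])
# Bucket 3 items go on the explicit "not-MVP" list, not into the build.
print(not_mvp)
```

Everything that fails the filter is recorded rather than silently dropped, which is what makes the trade-off explicit to stakeholders.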
According to Shreyas Doshi on Lenny's Podcast, the most common cause of MVP scope creep is not stakeholder pressure — it's the team's own uncertainty. When you're not sure what the riskiest assumption is, you add features to hedge. The discipline of writing down your single riskiest assumption before scoping forces you to commit.
Common MVP Scope-Creep Traps
The "Table Stakes" Trap
"We can't launch without X — users won't take us seriously."
This is sometimes true. Authentication, basic security, and GDPR compliance are real table stakes for B2B. But most "table stakes" claims are actually about team comfort or perceived credibility, not user necessity.
Counter: "What would happen if a user encountered the product without X?" If the answer is "they'd be confused or annoyed but would still evaluate the core value," X is not a table stake.
The Negative Space Trap
"What if a user does [edge case]? We need to handle that."
Edge cases are real. But MVP edge case handling should be: graceful error message, manual workaround, or explicit scope limitation ("beta — limited to X"). You do not need to build robust edge case handling before you know whether the core use case is valued.
The Demo Trap
"We need it to look good for [investor/customer/stakeholder] demos."
Demo needs and learning needs are different. Demo scope optimizes for impression; MVP scope optimizes for learning. If you're building for demos, call it a demo, not an MVP — and recognize you've deferred the real learning.
According to Gibson Biddle on Lenny's Podcast, the most important question a PM can ask when an MVP scope is growing is "are we learning faster or slower with each additional feature?" More features make the experiment more expensive and the signal noisier — not cleaner.
Defining MVP Scope: Practical Output
Your MVP scope document should contain:
- The riskiest assumption — one sentence
- The success criterion — what user behavior proves or disproves the assumption
- The feature list — Bucket 1 and minimum Bucket 2 only
- What's explicitly out of scope — a "not-MVP" list is as important as the MVP list
- The learning timeline — how long you'll run the experiment before deciding
- The decision criteria — what result triggers "build further" vs. "pivot" vs. "kill"
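The six items above fit in a lightweight data structure that can be checked for completeness before building starts. Every value here is a hypothetical placeholder, not a recommended product or threshold.

```python
# Minimal MVP scope document as a plain data structure.
# All values are hypothetical placeholders for illustration.

scope_doc = {
    "riskiest_assumption": "Users will switch from their current tool to ours",
    "success_criterion": "30% of trial users complete one real task in week 1",
    "features": {
        "bucket_1": ["one-click import from current tool"],
        "bucket_2_minimum": ["basic onboarding walkthrough"],
    },
    "not_mvp": ["dark mode", "admin analytics dashboard"],
    "learning_timeline_weeks": 4,
    "decision_criteria": {
        "build_further": "success criterion met or exceeded",
        "pivot": "users engage, but not with the core workflow",
        "kill": "no behavior change by the end of the timeline",
    },
}

# A missing field means the scope isn't actually defined yet.
required = [
    "riskiest_assumption", "success_criterion", "features",
    "not_mvp", "learning_timeline_weeks", "decision_criteria",
]
missing = [key for key in required if key not in scope_doc]
print(missing)
```

The completeness check is the useful part: if any field is empty or absent, the team is scoping by feel rather than against a named assumption.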
FAQ
Q: What is an MVP in product management? A: A Minimum Viable Product is the smallest set of functionality that tests your riskiest assumption about a product — not the smallest shippable version of the full product.
Q: How do you decide what to include in an MVP? A: Identify your single riskiest assumption. Include only features that are necessary to test that assumption. Remove everything that doesn't directly validate or invalidate it.
Q: How do you prevent MVP scope creep? A: Apply the "remove it" test to every feature. If a feature can be removed without invalidating the core assumption test, remove it. Write your riskiest assumption explicitly before scoping begins.
Q: What is the difference between an MVP and a prototype? A: A prototype simulates the product for feedback or validation without production functionality. An MVP is functional software used by real users to test a real behavior hypothesis. Both are valid tools — they test different things.
Q: How long should an MVP take to build? A: There is no universal answer, but if your MVP takes more than 6 weeks for a feature-level hypothesis, the scope is likely too large. True MVPs should be cheap enough to kill without regret.
HowTo: Define the Scope of a Product MVP
- Write down your single riskiest assumption in one sentence — the belief your product will fail if proven wrong
- Define the success criterion — the specific user behavior that proves or disproves the assumption
- Triage every proposed feature into three buckets: validates the core assumption, supports usability without testing the assumption, or out of scope
- Apply the remove-it test to every feature in the MVP list — if removing it still allows you to test the assumption, remove it
- Write a not-MVP list explicitly naming features that were considered and deferred, with brief rationale
- Define the decision criteria before building — what result triggers build further, pivot, or kill so the team knows what winning looks like