A design sprint for a new product feature compresses five weeks of design and user research into five days: Monday maps the problem, Tuesday sketches solutions, Wednesday decides on one direction, Thursday builds a prototype, and Friday tests it with real users, all with the goal of answering a critical question before committing to build.
The design sprint framework was created at Google Ventures as a time-constrained alternative to months of product discovery. Its power is not speed for its own sake — it's the discipline of having real user feedback in five days instead of five weeks, at a fraction of the cost of building the wrong thing.
This guide walks through every day of the sprint, the facilitation decisions that make or break each phase, and when to use a sprint versus alternatives.
When to Use a Design Sprint
Design sprints are high-value for:
- New features with significant UX complexity and uncertain user behavior
- Major redesigns of existing workflows
- New product lines or pivots where the concept needs validation before engineering commitment
- Situations where stakeholders have strong conflicting opinions about the right approach
Design sprints are low-value for:
- Incremental improvements to existing, well-understood features
- Technical infrastructure work with no UX component
- Features where you already have clear user signal from existing behavior or research
The Sprint vs. Alternatives Decision
| Situation | Best Approach |
|-----------|--------------|
| Complex, high-stakes new feature | Design sprint |
| Incremental improvement with user signal | Prototype + usability test (2 days) |
| Discovery of problems (not solutions) | User interviews + JTBD analysis |
| Validating a positioning hypothesis | Landing page test |
| Technical feasibility uncertainty | Engineering spike |
The Sprint Team
Ideal team size: 4–7 people
Required roles:
- Decider: The person with final authority on direction (product manager or head of product). When the team disagrees, the decider breaks the tie.
- Designer: Creates the prototype and facilitates the Tuesday-Wednesday design exercises.
- Engineer: Evaluates technical feasibility of solutions as they emerge. Prevents designing the unbuildable.
- Facilitator: Keeps the team on schedule and the exercises structured. Can be the PM or an outside facilitator.
Optional roles (invite as needed for 30–60 minutes):
- Customer success representative: brings direct customer context
- Marketing or sales representative: brings buyer context
- Domain expert (legal, compliance, etc.): for regulated features
Day-by-Day Guide
Monday: Map the Problem
Goal: Align the team on the problem, the user, and the long-term vision.
Exercises:
- Lightning talks (2 hours): Each expert shares context from their domain — product metrics, customer research, competitive analysis, technical constraints. 15 minutes per person.
- How Might We (1 hour): As experts talk, participants write "How Might We..." questions on sticky notes. "How might we make the approval workflow faster for remote teams?" These get voted on and clustered.
- Journey map (1 hour): Map the current user journey for the feature area. Mark the most critical moments and pain points.
- Sprint question (30 min): From the journey map, identify the single most critical question the sprint must answer. "Can we design an approval flow that doesn't require a meeting?"
The Sprint Question Is the Most Important Output of Monday
According to Lenny Rachitsky's writing on design sprints, the teams that get the most value from sprints almost always have a sharp sprint question — one that can be answered yes or no by Friday's user tests. "A vague sprint question produces a vague prototype and vague user feedback. A specific sprint question — 'will users trust an AI-generated summary enough to approve without reading the full document?' — produces a prototype that tests exactly that assumption."
Tuesday: Sketch Solutions
Goal: Generate many possible solutions before converging.
Exercises:
- Lightning Demos (1 hour): Each person shares 3–5 examples of products they find inspiring for this problem. Not just competitors — any product that solved a similar problem well. Capture "big ideas."
- Four-step sketch (3 hours): Each participant sketches their solution independently through four stages:
- Notes (20 min): Review Monday materials and take notes
- Ideas (20 min): Rough sketches, variations, explorations
- Crazy 8s (8 min): Fold paper into 8 sections; sketch 8 variations in 8 minutes
- Solution sketch (1 hour): One detailed, annotated, three-panel sketch of the best idea
Wednesday: Decide
Goal: Choose one direction from Tuesday's sketches and storyboard the prototype.
Decision process:
- Art museum (30 min): Post all solution sketches. Team does silent review with sticky notes, marking interesting parts.
- Heat map vote (20 min): Each participant places 1–3 dot stickers on parts of sketches they find most interesting. No discussion.
- Speed critique (30 min): Facilitator narrates each sketch for 3 minutes. No defending by the author.
- Straw poll (10 min): Each person votes for their preferred solution.
- Decider vote (10 min): The decider makes the final call, which may incorporate elements from multiple sketches.
- Storyboard (2 hours): Sketch all 10–15 frames of the prototype, frame by frame.
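If the heat map and straw poll happen on a digital whiteboard, the dot counts are easy to tally before the decider vote. A minimal sketch, assuming votes are exported as a flat list; the sketch names and vote data below are hypothetical:

```python
from collections import Counter

# Hypothetical heat-map export: one entry per dot sticker,
# keyed by the solution sketch the dot landed on.
dot_votes = [
    "Sketch A", "Sketch B", "Sketch A", "Sketch C",
    "Sketch A", "Sketch B", "Sketch A",
]

tally = Counter(dot_votes)
for sketch, dots in tally.most_common():
    print(f"{sketch}: {dots} dots")
```

The tally surfaces where interest clusters; it does not replace the decider vote, which may still pull elements from lower-voted sketches.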
Thursday: Build the Prototype
Goal: Create a realistic-feeling prototype that can be tested on Friday.
Prototype principles:
- Fake it: The prototype doesn't need to work — it needs to feel real enough to get honest reactions
- Surface only: Build the screens users will see; don't build the back end
- Good enough: A prototype that takes 8 hours to build on Thursday beats a perfect one that takes 8 days
Tools: Figma (recommended), InVision, or high-fidelity wireframes. The goal is click-through prototypes, not static mockups.
According to Shreyas Doshi on Lenny's Podcast, Thursday prototype quality is the most common design sprint failure point. "Teams spend Monday through Wednesday creating a vision and then rush the prototype on Thursday. If the prototype is too rough to feel real, Friday's user feedback is about prototype quality, not the concept. Build something that a user would believe is a real product."
Friday: Test
Goal: Watch 5 users interact with the prototype and answer the sprint question.
Testing protocol:
- Recruit 5 users from your ICP (5 is enough to reveal patterns — beyond that, returns diminish quickly)
- Each session: 60 minutes. Welcome and context (10 min), prototype interaction (35 min), debriefing questions (15 min)
- Rest of team watches via live stream or notes in a separate room
- After 5 sessions: team reviews notes and identifies patterns in a 2-hour debrief
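The diminishing-returns claim for five users is usually justified with Nielsen and Landauer's model: if each tester independently surfaces a given usability problem with probability λ, then n testers find a 1 − (1 − λ)^n share of problems. A quick sketch, assuming the commonly cited λ ≈ 0.31 (an estimate that varies by product and task, not a universal constant):

```python
# Expected share of usability problems uncovered by n test users,
# per the Nielsen-Landauer model: found(n) = 1 - (1 - lam)^n.
# lam = 0.31 is an assumed per-user discovery rate.

def problems_found(n: int, lam: float = 0.31) -> float:
    return 1 - (1 - lam) ** n

for n in (1, 3, 5, 8):
    print(f"{n} users: ~{problems_found(n):.0%} of problems found")
```

Under that assumption, five users surface roughly 85% of problems, while doubling to ten sessions adds only a few points — hence the protocol's fixed count of five.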
What to look for:
- Did users understand the concept immediately?
- Where did users hesitate, get confused, or take an unexpected path?
- Did users complete the core task the sprint was designed around?
- What surprised users (positively or negatively)?
According to Gibson Biddle on Lenny's Podcast, the most valuable design sprint output is not the prototype that gets shipped — it's the user behavior you observed that surprised the team. "We ran design sprints at Netflix primarily to get surprised. If the users did exactly what we expected, we learned something useful. If they did something completely unexpected, we learned something invaluable."
FAQ
Q: How do you run a design sprint for a new product feature? A: Five days: Monday maps the problem and defines the sprint question, Tuesday sketches solutions independently, Wednesday votes on a direction and storyboards the prototype, Thursday builds a testable prototype, and Friday tests with 5 real users to answer the sprint question.
Q: How many people should be in a design sprint? A: 4 to 7 people. The required roles are a decider with final authority, a designer, an engineer, and a facilitator. Domain experts and customer-facing team members can be invited for 30 to 60 minute sessions on Monday.
Q: What makes a good design sprint question? A: A question that can be answered yes or no by Friday's user tests and that is specific enough to produce a prototype that tests exactly one assumption. A vague sprint question produces vague user feedback.
Q: When should you not use a design sprint? A: Incremental improvements to well-understood features, technical infrastructure work, or situations where you already have clear user signal. Use a 2-day prototype and usability test for smaller questions; use user interviews for discovery of problems not solutions.
Q: What should a design sprint prototype look like? A: A click-through prototype realistic enough to feel like a real product — not polished enough to ship, but credible enough to get honest user reactions. Built in one day in Figma or InVision, covering only the screens users will actually interact with.
HowTo: Run a Design Sprint for a New Product Feature
- Define the sprint question on Monday — the specific yes or no question the Friday user test will answer — because a sharp sprint question is the most important output of the first day
- Map the current user journey for the feature area on Monday and identify the most critical pain points and moments to design around
- Generate solutions independently on Tuesday using the four-step sketch process: notes, idea sketches, Crazy 8s rapid variations, and one detailed three-panel solution sketch
- Choose the direction on Wednesday through silent heat map voting, a speed critique of each sketch, and a final decider vote that can incorporate elements from multiple solutions
- Build a click-through prototype on Thursday realistic enough to get honest user reactions — good enough is better than perfect when it means users can interact authentically
- Test with 5 ICP users on Friday using 60-minute sessions and a 2-hour team debrief to identify patterns, surprises, and the answer to the sprint question