A product autopsy is most valuable when it answers three questions without blame: what did we believe would happen, what actually happened, and what was the earliest signal we had that we were wrong — and why did we miss or ignore it?
Product autopsies — also called post-mortems, retrospectives, or learning reviews — are among the most underinvested rituals in product development. Teams run them badly (as blame sessions), not at all (out of embarrassment), or too late (months after the signal was available).
Done well, a product autopsy is one of the highest-leverage activities a PM can run. The same mistakes repeat in every organization until someone systematically documents them and changes the process.
When to Conduct a Product Autopsy
Not every feature needs a full autopsy. Run one when:
- A feature or product is being sunset or significantly scaled back
- A launch missed its success metrics by 30%+ within the first 90 days
- A product bet consumed significant resources (3+ months of engineering) and failed to achieve its hypothesis
- A significant customer loss is attributable to a product decision
- A team had strong conviction about something that turned out to be wrong
Do NOT run an autopsy on every small A/B test or iteration. Reserve the process for decisions that had real stakes and produced learnable surprises.
The Four-Part Autopsy Structure
Part 1: Reconstruct the Bet
Before analyzing what went wrong, document what you believed when you made the decision. This is the most important and most neglected step. Memory is revisionist — people quickly convince themselves they had doubts all along.
Answer these questions using artifacts from the time (PRD, launch plan, design docs, stakeholder emails):
- What was the hypothesis? (If we build X, customers will do Y, leading to Z outcome)
- What evidence supported the hypothesis at the time?
- What confidence level did the team have? (Use a percentage)
- What were the known risks at the time?
- What would have caused us not to make this bet? (What evidence would have changed the decision?)
The goal is to reconstruct the decision as it was made — not as it looks in hindsight.
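If your team records bets in a structured form, the reconstruction is easier to do honestly. A minimal sketch of what such a record might capture, in Python (the field names and example values are illustrative, not a standard template):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ProductBet:
    """A product bet as it was understood at decision time."""
    hypothesis: str              # "If we build X, customers will do Y, leading to Z"
    evidence: list[str]          # what supported the hypothesis at the time
    confidence: float            # the team's stated confidence, 0.0 to 1.0
    known_risks: list[str]       # risks acknowledged at decision time
    disconfirming_evidence: str  # what would have changed the decision
    decided_on: date             # when the bet was made

# Hypothetical example, reconstructed from the PRD and kick-off notes
bet = ProductBet(
    hypothesis="If we build bulk export, admins will use it weekly, lifting retention 5%",
    evidence=["12 support tickets requesting export", "2 churned accounts cited it"],
    confidence=0.70,
    known_risks=["usage may concentrate in a handful of power users"],
    disconfirming_evidence="interviews showing admins export less than monthly",
    decided_on=date(2024, 3, 1),
)
```

Whether you use code, a wiki template, or a spreadsheet, the discipline is the same: every field is filled in from artifacts dated before the decision, not from memory.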
Part 2: Document What Actually Happened
Create a timeline of the actual outcome, including:
- Key metrics at launch, 30 days, 60 days, 90 days
- Qualitative signals (customer feedback, support tickets, sales team reactions)
- The moment the team first knew the hypothesis was in trouble
- Any pivots made during the period and their outcomes
Do not editorialize in this section. Just document what happened and when.
Part 3: Identify the Earliest Signal
This is the highest-leverage question in any autopsy: when was the first signal available that the hypothesis was wrong?
In most product failures, the signal was available weeks or months before the team acted on it. The autopsy reveals the gap between signal availability and signal acknowledgement — and that gap is where the real learning lives.
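Once the Part 2 timeline has dates attached, the gap is a simple subtraction. A sketch with hypothetical dates:

```python
from datetime import date

# Part 2 timeline entries, reconstructed from dashboards and support tickets
launch = date(2024, 4, 15)
signal_available = date(2024, 5, 2)      # usage flat week over week: first signal
signal_acknowledged = date(2024, 6, 20)  # team opens a course-correction review

gap = (signal_acknowledged - signal_available).days
print(f"Signal-to-acknowledgement gap: {gap} days")  # 49 days
```

The number itself matters less than the conversation it forces: what happened during those weeks, and what made acting on the signal costly?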
Common reasons teams miss early signals:
- Confirmation bias: interpreting ambiguous data in favor of the hypothesis
- Sunk cost: reluctance to change course after significant investment
- Organizational pressure: a VP sponsor making it politically costly to raise concerns
- Measurement gap: the metric that would have shown the problem wasn't being tracked
According to Shreyas Doshi on Lenny's Podcast, the most valuable output of any product post-mortem is not "what we should have built." It is "what information was available earlier that would have changed our decision, and why we did not act on it." That gap is a process failure, not a judgment failure.
Part 4: Generate System-Level Changes
The final and most critical step is converting learnings into system changes. Autopsy findings that produce only personal lessons have a half-life of 3 months. Findings that produce process changes outlast the people who ran the autopsy.
For each root cause identified, ask: what process, template, checklist, or ritual would catch this earlier next time?
Examples:
- Hypothesis was not validated with enough customer interviews before build → Add a validation gate to the PRD approval process
- Success metrics were not defined before launch → Add a pre-launch metrics confirmation to the launch checklist
- Early signals were dismissed because of VP pressure → Add a blind data review step before any course-correction decision
- Team had no clear kill criteria → Add explicit kill criteria to every product bet at kick-off
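Kill criteria in particular lend themselves to being mechanical. A minimal sketch, assuming the kick-off doc recorded explicit thresholds (the metric names and numbers here are hypothetical):

```python
# Thresholds agreed at kick-off; breaching any one triggers a go/kill review.
KILL_CRITERIA = {
    "day30_weekly_active_users": 500,  # minimum acceptable at day 30
    "day60_retention_rate": 0.25,      # minimum acceptable at day 60
}

def breached(observed: dict[str, float]) -> list[str]:
    """Return the kill criteria the observed metrics fall below."""
    return [name for name, floor in KILL_CRITERIA.items()
            if observed.get(name, 0.0) < floor]

checkpoint = {"day30_weekly_active_users": 340, "day60_retention_rate": 0.31}
if failures := breached(checkpoint):
    print(f"Kill review triggered by: {failures}")
```

The value is not the code but the commitment: the review fires when a threshold is breached, not when someone feels safe enough to raise it.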
Running the Autopsy Session
Who Should Attend
Keep the group small (5–8 people maximum) and include:
- PM who owned the product/feature
- Engineering lead who built it
- Designer who worked on it
- One customer-facing person (CSM or sales rep who had direct customer conversations)
- One stakeholder who was not involved in the decision (for fresh perspective)
Do NOT include the VP or C-level executive who originally sponsored the bet. Their presence creates political distortion. Share findings with them after the session.
Facilitation Guidelines
- Share the reconstructed bet (Part 1) in advance so everyone has shared context before the session
- Start with Part 2 (what happened) before moving to root cause — keep fact-finding and interpretation separate
- Use a talking stick or structured turn-taking to prevent dominant voices from shaping the narrative
- Ban the phrase "we should have known better"; replace it with "what information would have changed the decision?"
- Document everything in real time on a shared doc; do not rely on memory or notes taken after the session
The Blame-Free Framing
The facilitator's most important job is maintaining a blame-free environment. The goal is not to identify who made a bad call — it's to identify what conditions allowed a bad call to go undetected for so long.
Reframe individual failures as system failures: instead of "the PM should have noticed the retention drop sooner," ask "what monitoring would have surfaced the retention drop automatically?"
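As a sketch of what such monitoring might mean in practice (the threshold and data source are assumptions, not a recommendation):

```python
def retention_alert(history: list[float], max_drop: float = 0.10) -> str | None:
    """Flag when the latest retention rate falls more than max_drop
    (relative) below its recent peak. history is chronological."""
    if len(history) < 2:
        return None
    peak, latest = max(history[:-1]), history[-1]
    if latest < peak * (1 - max_drop):
        return f"Retention {latest:.0%} is more than {max_drop:.0%} below peak {peak:.0%}"
    return None

# Hypothetical weekly cohort retention rates
alert = retention_alert([0.42, 0.41, 0.40, 0.33])
if alert:
    print(alert)  # the system notices; no individual has to
```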
Documenting and Sharing Autopsy Findings
Write a 1–2 page autopsy document and share it with:
- The product team (all PMs, designers, engineers who might face similar decisions)
- Your manager and skip-level (so organizational patterns are visible)
- New team members during onboarding (institutional memory)
The document should include:
- The original hypothesis (verbatim, with confidence level)
- What happened (timeline with metrics)
- The earliest available signal and why it was missed
- Root causes (process, measurement, organizational)
- System changes made as a result
Store it in a shared product wiki with tags so it can be found when someone is about to make a similar bet.
FAQ
Q: What is a product autopsy? A: A structured post-mortem conducted after a product failure, sunset, or missed launch goal — designed to identify what was believed, what happened, why early signals were missed, and what process changes will prevent the same failure next time.
Q: How is a product autopsy different from a sprint retrospective? A: A sprint retro reviews team process and workflow at the end of each sprint. A product autopsy reviews a strategic product bet — it focuses on hypothesis quality, signal detection, and organizational decision-making, not sprint velocity or team rituals.
Q: How long should a product autopsy take? A: 90 minutes for the live session. 30 minutes of preparation (distributing the reconstructed bet in advance). 1–2 hours to write the final document. Total: half a day for a high-stakes product failure.
Q: Should executives attend product autopsies? A: No. Their presence creates political distortion and inhibits honest discussion. Share findings with them after the session in a written summary.
Q: What is the most important output of a product autopsy? A: System-level process changes that will catch similar failures earlier next time. Personal lessons without process changes have a half-life of 3 months.
How to Conduct a Product Autopsy
- Reconstruct the original bet using contemporaneous artifacts — PRD, design docs, stakeholder emails — to document the hypothesis, evidence, confidence level, and known risks at decision time
- Document the actual outcome timeline with metrics at launch, 30, 60, and 90 days without editorializing
- Identify the earliest available signal that the hypothesis was wrong and analyze why it was missed or ignored
- Run the autopsy session with 5 to 8 people in a blame-free environment, sharing Part 1 in advance and separating fact-finding from root cause analysis
- Convert each root cause into a specific system change: a process gate, checklist item, or monitoring rule that would catch the same failure earlier
- Write a 1 to 2 page document and store it in a shared product wiki so institutional memory outlasts the people who ran the autopsy