An example of a north star metric for a B2B SaaS product is Slack's number of messages sent per active team per week — a metric that captures real usage depth, correlates with retention, and tells the team whether the product is delivering its core value of frictionless team communication.
Most B2B SaaS teams default to revenue or MRR as their north star. This is almost always wrong. Revenue is a lagging indicator — it tells you what happened three months ago. A true north star metric is a leading indicator that predicts whether revenue will grow before it shows up in your books.
This guide provides real north star metric examples across B2B SaaS categories and a framework for choosing the right one for your product.
What Makes a Good North Star Metric for B2B SaaS?
A strong north star metric for B2B SaaS must satisfy five criteria:
- Measures customer value delivery — not just product activity
- Is predictive of revenue — leading, not lagging
- Is actionable — your team can run experiments that move it
- Is understandable — a sales rep and an engineer can both explain it
- Is honest — it cannot be gamed without actually delivering value
North Star Metric Examples by B2B SaaS Category
Project Management SaaS
- Asana: Number of tasks completed per active team per week
- Linear: Issues resolved per engineering team per sprint
- Monday.com: Boards updated per user per week
Why these work: They measure whether teams are actually using the tool to get work done, not just logging in.
Communication SaaS
- Slack: Messages sent per active team per week
- Loom: Share of shared videos watched to completion
- Notion: Pages collaborated on (2+ editors) per workspace per week
Why these work: Collaboration depth, not just creation, signals that the product is reducing friction rather than adding it.
Sales and CRM SaaS
- HubSpot: Deals moved to next stage per user per week
- Outreach: Sequences completed per SDR per week
- Gong: Calls analysed per sales team per month
Why these work: Sales tools must produce pipeline outcomes, not just call logging — measuring downstream deal activity proves the tool is driving revenue.
Analytics SaaS
- Amplitude: Charts shared per analyst per week
- Looker: Dashboards queried by non-technical users per week
- Mixpanel: Insights acted on (connected to experiment) per product team per quarter
Why these work: Analytics tools only deliver value when insights change decisions — measuring downstream action is far more honest than measuring queries run.
HR and Hiring SaaS
- Greenhouse: Candidates moved to offer stage per open role per month
- Lattice: Performance reviews completed on time per team
- Rippling: Time saved per payroll run per company
Why these work: They measure outcomes the buyer pays for (offers made, reviews completed, hours saved) rather than records created in the system.
DevTools SaaS
- GitHub: Pull requests merged per developer per week
- Datadog: Alerts resolved per on-call engineer per incident
- LaunchDarkly: Feature flags activated per deployment per team
Why these work: They track shipped work and resolved incidents, the outcomes engineering teams buy these tools to accelerate, rather than raw tool usage.
According to Shreyas Doshi on Lenny's Podcast, the most common north star metric mistake in B2B SaaS is choosing a metric that measures product engagement rather than customer outcome — a project management tool that tracks logins instead of tasks completed will optimise for habit, not value.
The North Star Metric Selection Framework
Step 1: Write your value promise
"We help [customer] achieve [outcome] by [mechanism]"
Step 2: Identify the moment value is delivered
What does the customer DO when they've gotten the value?
Step 3: Make that action measurable
Frequency × Depth × Breadth
Frequency: how often it happens
Depth: how completely it happens
Breadth: how many users/teams it happens across
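The Frequency × Depth × Breadth idea can be sketched in code. A minimal Python illustration using a hypothetical weekly event log for a task-completion metric; the team IDs, counts, and the depth threshold are all invented for illustration:

```python
from collections import defaultdict

# Hypothetical event log: (team_id, user_id, tasks_completed_this_week)
events = [
    ("team-a", "u1", 12), ("team-a", "u2", 7),
    ("team-b", "u3", 3),  ("team-b", "u4", 0),
]

def north_star(events, depth_threshold=5):
    """Score each team on frequency x depth x breadth for one week.

    frequency: total task completions on the team
    depth:     mean completions per user on the team
    breadth:   share of users who cleared the depth threshold
    """
    teams = defaultdict(list)
    for team, _user, count in events:
        teams[team].append(count)

    scores = {}
    for team, counts in teams.items():
        frequency = sum(counts)
        depth = frequency / len(counts)
        breadth = sum(c >= depth_threshold for c in counts) / len(counts)
        scores[team] = frequency * depth * breadth
    return scores
```

The multiplicative form is deliberate: a team with one hyperactive user (high depth, zero breadth) scores low, which matches the intuition that team-wide value delivery is what retains B2B accounts.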
Step 4: Test predictive validity
Do customers who score high on this metric churn less?
Do they expand (upsell) more?
Does it predict NRR 6 months out?
Step 5: Check actionability
Can your team run 3 experiments in the next quarter that would plausibly move this metric?
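Step 4 can be sketched as a simple back-test. This Python sketch uses invented historical data (the scores and churn flags are hypothetical): it splits customers at the median metric score and compares churn rates between the two halves.

```python
# Hypothetical back-test data: (north_star_score_in_month_1, churned_within_6_months)
customers = [
    (180.0, False), (150.0, False), (90.0, False), (75.0, True),
    (40.0, True),   (25.0, True),   (10.0, True),  (5.0, True),
]

def churn_by_half(customers):
    """Split customers at the median score; compare churn rates.

    If the high-score half churns markedly less, the candidate metric
    has predictive validity worth testing more rigorously, e.g. against
    NRR 6 months out.
    """
    def churn_rate(group):
        return sum(churned for _, churned in group) / len(group)

    ranked = sorted(customers, key=lambda c: c[0], reverse=True)
    mid = len(ranked) // 2
    high, low = ranked[:mid], ranked[mid:]
    return churn_rate(high), churn_rate(low)
```

A real back-test would use your full customer history and a proper correlation or survival analysis, but even this coarse split will quickly rule out metrics with no relationship to retention.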
Common B2B SaaS North Star Anti-Patterns
Revenue / MRR
Lagging indicator. By the time it drops, churn has already happened.
Monthly Active Users
B2B tools don't need daily engagement. A contract management tool used once a month per deal is delivering full value. MAU penalises low-frequency high-value tools.
Feature adoption rate
Measures what your product does, not whether customers succeed. A customer can adopt a feature and still churn.
NPS score
Input to improvement, not a north star. Customers give high NPS scores for all kinds of reasons unrelated to your product's core value delivery.
According to Gibson Biddle on Lenny's Podcast, the test of a good north star metric is whether you would accept a lower short-term revenue number in exchange for a higher north star metric number — if you wouldn't trade, it isn't really your north star.
How to Roll Out a North Star Metric
- Workshop it — bring PM, engineering leads, and data together for 2 hours to debate 3–5 candidate metrics
- Back-test it — run the metric against historical data and check whether high-scorers retained better
- Publish it — put the current value on a shared dashboard visible to all teams
- Report it weekly — include it in the weekly product review as the first number
- Cascade it — break the north star into team-level input metrics each sub-team owns
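The cascade step can be made concrete with a small sketch. Assuming a Slack-style north star (messages sent per active team per week), it factors into input metrics that individual sub-teams can own; the factors and numbers here are hypothetical:

```python
# Hypothetical input metrics, each owned by one sub-team.
inputs = {
    "users_per_team": 8.0,      # onboarding team: drive invites and seat activation
    "messages_per_user": 45.0,  # core product team: drive engagement depth
}

def messages_per_active_team(inputs):
    """Roll sub-team input metrics up into the single company north star.

    An onboarding experiment that lifts users_per_team, or a product
    experiment that lifts messages_per_user, moves the north star directly.
    """
    return inputs["users_per_team"] * inputs["messages_per_user"]
```

Because the north star is the product of its inputs, each sub-team can report its own factor weekly while the company dashboard shows only the single roll-up, keeping the company-level metric singular.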
According to Lenny Rachitsky's writing on north star metrics, the single biggest predictor of whether a north star metric succeeds is whether the CEO and CPO will say no to an initiative that moves revenue but doesn't move the north star — without that commitment, the metric is decorative.
FAQ
Q: What is a north star metric for B2B SaaS? A: A north star metric is a single leading indicator that best captures whether your product is delivering its core value to customers. For B2B SaaS it should measure customer outcomes, not just product activity, and predict revenue 3–6 months before it shows up.
Q: What are examples of north star metrics for B2B SaaS? A: Slack uses messages sent per active team per week. Asana uses tasks completed per team per week. GitHub uses pull requests merged per developer per week. Each measures whether the product is actually delivering its value promise.
Q: Why is MRR a bad north star metric? A: MRR is a lagging indicator — it reflects past decisions, not future health. By the time MRR drops, churn has already happened. A leading metric like product usage depth predicts MRR 3–6 months in advance.
Q: How do you choose a north star metric for a B2B SaaS product? A: Start with your value promise, identify the moment value is delivered, make that action measurable, test whether high scorers churn less, and check whether your team can run experiments that move it.
Q: How many north star metrics should a product team have? A: One. Multiple north stars create prioritisation conflicts. Sub-teams can have their own input metrics that roll up to the one north star, but the company-level metric must be singular.
HowTo Steps
- Write your product value promise in one sentence: "We help [customer] achieve [outcome] by [mechanism]"
- Identify the specific action customers take at the moment your product delivers that value
- Make that action measurable with frequency, depth, and breadth dimensions
- Back-test the candidate metric against historical data to confirm high scorers churn less and expand more
- Publish the metric on a shared dashboard and include it as the first number in every weekly product review
- Cascade the north star into team-level input metrics so every sub-team has a clear line of sight to the one company metric