
Churn prediction for SaaS: how to spot at-risk accounts before they leave

How to identify at-risk SaaS accounts using product usage signals, support data, and engagement patterns before they churn.

by Skene

By the time a customer asks to cancel, the battle is already lost. Cancellation requests, downgrade notices, and angry support tickets are lagging indicators -- they tell you what already happened, not what is about to happen.

The SaaS companies that maintain low churn rates do not react to cancellations. They predict them. They identify at-risk accounts weeks or months before the customer decides to leave, and they intervene while there is still time to change the outcome.

This guide covers how to build a churn prediction system using product usage signals, how to score account health, and how to design interventions that actually prevent churn.

Why lagging indicators are too late

Most companies track churn reactively:

  • A customer submits a cancellation request
  • A CSM tries to save the account
  • Win-back success rate: 10-15%

This is the worst possible approach because by the time someone cancels, they have already:

  1. Mentally moved on -- they have evaluated alternatives, possibly started a trial elsewhere
  2. Stopped getting value -- their usage has declined to the point where paying feels unjustifiable
  3. Built a narrative -- they have a story about why the product does not work for them, and that story is hard to unwind

The alternative is to catch accounts during the decay phase -- the period between peak engagement and cancellation. This window is typically 4-12 weeks for B2B SaaS and is where intervention has the highest success rate (30-50% save rate vs. 10-15% at cancellation).

Leading indicators of churn

These are the signals that predict churn before it happens. They fall into five categories.

1. Declining login frequency

This is the most reliable single predictor. Track each account's login frequency on a rolling basis and flag significant declines.

How to measure it:

  • Calculate a 7-day rolling average of daily active users per account
  • Compare it to the account's 30-day average
  • Flag accounts where the 7-day average drops below 50% of the 30-day average
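The three steps above can be sketched in a few lines. This is a minimal illustration, assuming `logins` maps each account ID to a list of daily active user counts with the most recent day last (a hypothetical data shape, not a specific analytics API):

```python
def login_decline_flags(logins, threshold=0.5):
    """Flag accounts whose 7-day average DAU falls below
    `threshold` (default 50%) of their 30-day average."""
    flagged = []
    for account_id, daily_dau in logins.items():
        if len(daily_dau) < 30:
            continue  # not enough history to establish a baseline
        avg_7 = sum(daily_dau[-7:]) / 7
        avg_30 = sum(daily_dau[-30:]) / 30
        if avg_30 > 0 and avg_7 < threshold * avg_30:
            flagged.append(account_id)
    return flagged
```

For products with variable cadence (see the caveat below), widen the windows rather than the threshold, so the baseline reflects a full usage cycle.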

Why it works: Login frequency is a proxy for perceived value. When people stop logging in, they are either getting value elsewhere, their workflow changed, or they have deprioritized your product. All three are churn precursors.

Caveat: Some products have naturally variable usage (monthly reporting tools, quarterly planning software). Adjust your baseline to match your product's natural usage cadence.

2. Feature usage drop

Beyond login frequency, track whether customers are using the features that define your product's core value.

What to track:

  • Number of core actions per week (messages sent, reports created, tasks completed)
  • Breadth of feature usage (are they using fewer features than before?)
  • Depth of feature usage (are they doing less within each feature?)

A customer who still logs in daily but has stopped using your core features is at higher risk than their login data suggests. They may be logging in out of habit or to check one minor thing, not because they are getting real value.
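Breadth and depth can be derived from the same weekly event counts. A minimal sketch, assuming `events` maps feature names to core-action counts for the week (an illustrative shape):

```python
def usage_breadth_depth(events):
    """Breadth = distinct features used; depth = average core
    actions per feature actually used."""
    used = {feature: n for feature, n in events.items() if n > 0}
    breadth = len(used)
    depth = sum(used.values()) / breadth if breadth else 0.0
    return breadth, depth
```

Track both numbers week over week per account; a shrinking breadth with stable logins is exactly the habit-only pattern described above.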

3. Support ticket sentiment shift

Support tickets contain rich signals about customer health, but most companies only look at volume. Sentiment matters more.

Positive signals (lower churn risk):

  • Feature requests ("Can you add X?") -- the customer is invested in your product's future
  • How-to questions ("How do I do Y?") -- the customer is trying to get more value
  • Integration inquiries -- the customer is deepening their commitment

Negative signals (higher churn risk):

  • Complaints about reliability or bugs -- the customer is losing trust
  • Requests to export data -- the customer may be preparing to leave
  • Questions about billing, contracts, or cancellation policy
  • Tone shift: previously engaged customer becomes terse or frustrated
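A production system would use a sentiment model, but even a crude keyword triage captures the highest-risk tickets. The keywords and point values below are illustrative, loosely mirroring the scoring rubric later in this article:

```python
# Hypothetical keyword -> risk-point mapping; tune against your own tickets.
RISK_KEYWORDS = {
    "cancel": 15,
    "export my data": 10,
    "refund": 8,
    "broken": 8,
}

def ticket_risk_points(ticket_text):
    """Return the risk points of the strongest matching keyword, 0 if none."""
    text = ticket_text.lower()
    return max(
        (pts for keyword, pts in RISK_KEYWORDS.items() if keyword in text),
        default=0,
    )
```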

4. Payment and billing signals

Financial signals are among the most predictive, yet many companies do not incorporate them into churn models.

Warning signs:

  • Payment failures: Even if eventually resolved, failed payments correlate with churn. The customer may be questioning the expense.
  • Downgrade requests: A customer who downgrades from a higher tier is signaling that the perceived value has decreased.
  • Billing inquiries: Questions about invoices, pricing, or cost reduction indicate budget scrutiny.
  • Unused entitlements: Paying for 50 seats but only using 20 means the customer will eventually notice and either downgrade or leave.

5. Reduced team engagement

For multi-user products, tracking team-level engagement is critical. A champion leaving or team adoption declining often precedes churn.

What to track:

  • Active user count trend: Is the number of active users on the account growing or shrinking?
  • Champion activity: Is the primary user (the person who signed up or manages the account) still active?
  • New user additions: Have they stopped adding team members?
  • Admin activity: Is someone still configuring, customizing, and managing the product?

When the champion leaves and no one takes over admin responsibilities, the account is in immediate danger.
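The champion check is simple to automate. A sketch, assuming `last_active` maps user IDs to days since last login and `champion_id` identifies the account's primary user (illustrative names):

```python
def champion_at_risk(last_active, champion_id, max_idle_days=7):
    """True if the champion has been inactive beyond the threshold,
    or has disappeared from the account entirely."""
    return last_active.get(champion_id, float("inf")) > max_idle_days
```

Treating a missing champion as at-risk (the `float("inf")` default) is deliberate: a departed champion is the strongest version of this signal.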

Building a simple churn risk model (no ML required)

You do not need machine learning to predict churn effectively. A score-based system using 5-6 signals will catch the majority of at-risk accounts. Here is how to build one.

The scoring model

Assign risk points for each signal. Higher total score = higher churn risk.

Login frequency decline (0-25 points):

  • 7-day average is 70-90% of 30-day average: 5 points
  • 7-day average is 50-70% of 30-day average: 15 points
  • 7-day average is below 50% of 30-day average: 25 points

Core feature usage decline (0-20 points):

  • Core actions down 20-40% vs. previous month: 5 points
  • Core actions down 40-60%: 12 points
  • Core actions down 60%+ or zero core actions this week: 20 points

Team engagement (0-20 points):

  • Active users declined by 1-2: 5 points
  • Active users declined by 3+: 10 points
  • Champion user inactive for 7+ days: 10 points (additive)

Support sentiment (0-15 points):

  • Negative sentiment support ticket in last 30 days: 8 points
  • Data export request: 10 points
  • Cancellation policy inquiry: 15 points

Billing signals (0-15 points):

  • Payment failure in last 60 days: 8 points
  • Downgrade in last 90 days: 10 points
  • Unused entitlements (using less than 50% of paid capacity): 5 points

Engagement trend (0-5 points):

  • Three consecutive weeks of declining usage: 5 points

Total possible: 100 points
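The rubric above translates directly into a scoring function. This is a sketch: the field names on the `account` dict are illustrative, and where a category's individual signals could exceed its cap, it takes the strongest single signal (support sentiment) or caps the sum (billing), which is one reasonable reading of the rubric.

```python
def churn_risk_score(account):
    """Compute a 0-100 churn risk score from the rubric above."""
    score = 0
    # Login frequency decline (0-25)
    ratio = account["avg_7d"] / account["avg_30d"] if account["avg_30d"] else 0
    if ratio < 0.5:
        score += 25
    elif ratio < 0.7:
        score += 15
    elif ratio < 0.9:
        score += 5
    # Core feature usage decline (0-20)
    drop = account["core_action_drop_pct"]  # fraction vs. previous month
    if drop >= 0.6 or account["core_actions_this_week"] == 0:
        score += 20
    elif drop >= 0.4:
        score += 12
    elif drop >= 0.2:
        score += 5
    # Team engagement (0-20; champion inactivity is additive)
    lost = account["active_users_lost"]
    if lost >= 3:
        score += 10
    elif lost >= 1:
        score += 5
    if account["champion_inactive_days"] >= 7:
        score += 10
    # Support sentiment (0-15; strongest single signal counts)
    if account["cancellation_inquiry"]:
        score += 15
    elif account["data_export_request"]:
        score += 10
    elif account["negative_ticket_30d"]:
        score += 8
    # Billing signals (0-15; sum capped at the category max)
    billing = 0
    if account["payment_failure_60d"]:
        billing += 8
    if account["downgrade_90d"]:
        billing += 10
    if account["seat_utilization"] < 0.5:
        billing += 5
    score += min(billing, 15)
    # Engagement trend (0-5)
    if account["declining_weeks"] >= 3:
        score += 5
    return score
```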

Setting risk tiers

  Risk Level      Score Range   Accounts (typical)    Intervention
  Low risk        0-20          60-70% of accounts    Automated monitoring
  Medium risk     21-45         15-25% of accounts    Targeted outreach
  High risk       46-70         5-10% of accounts     Human intervention
  Critical risk   71-100        1-3% of accounts      Immediate action

Calibrate these ranges based on your actual churn data. Score your last 50 churned accounts retroactively to validate that the model would have flagged them.
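Mapping scores to tiers is a one-liner per boundary. A minimal sketch using the default ranges above (replace the cutoffs with your calibrated values):

```python
def risk_tier(score):
    """Map a 0-100 churn risk score to an intervention tier."""
    if score <= 20:
        return "low"
    if score <= 45:
        return "medium"
    if score <= 70:
        return "high"
    return "critical"
```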

When to use ML-based prediction (and when it is overkill)

ML churn models become valuable when you have 1,000+ customers, 50+ data points per account, and your score-based model has plateaued. ML is overkill with fewer than 500 customers, very low churn rates (under 2% monthly), or before you have built a basic scoring model. For most early and mid-stage SaaS companies, a well-tuned scoring model outperforms a poorly trained ML model.

Intervention playbook by risk level

Identifying at-risk accounts is only half the job. The other half is intervening effectively.

Low risk (score 0-20): Automated monitoring

Continue standard engagement, monitor trends, drive deeper adoption through in-app prompts, and collect feedback via NPS surveys.

Medium risk (score 21-45): Targeted outreach

Send automated re-engagement emails referencing specific underused features. Use in-app nudges and educational content based on their usage patterns. Offer a quick walkthrough of relevant features.

High risk (score 46-70): Human intervention

Personal outreach from a founder or product lead -- not a template. Acknowledge the usage decline, ask what changed, offer a call, and provide immediate value (a custom configuration, a workaround, a direct fix).

Critical risk (score 71-100): Immediate action

Phone call or direct message without waiting for email replies. Executive outreach carries weight. Offer concessions if appropriate. If they have decided to leave, make offboarding smooth -- a good exit experience leaves the door open for return.

Connecting churn signals to product improvements

The patterns in your churn data are a product roadmap. Analyze churn by feature gap (which features do churned customers underuse?), by segment (is churn concentrated in specific company sizes or acquisition channels?), and by onboarding completion (is there a customer health score threshold below which churn is almost certain?). Feed these insights back to your product team. The best churn prevention is building a product that people cannot imagine working without.

Benchmarks: what good early warning looks like

  • Detection rate: Flag 60-80% of accounts that eventually churn, at least 4 weeks before cancellation
  • False positive rate: Expect 20-30% of flagged accounts to not churn. Above 50% means your model is too sensitive
  • Intervention success rate: Medium-risk outreach should save 20-30%; high-risk human intervention 15-25%; critical-risk 5-15%
  • Time to detection: Aim for at least 30 days before projected churn. Mature models achieve 60-90 day lead times
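The first two benchmarks fall out of a simple set comparison. A sketch, assuming `flagged` and `churned` are sets of account IDs from a historical window (hypothetical shapes):

```python
def early_warning_metrics(flagged, churned):
    """Return (detection_rate, false_positive_rate) for a past period."""
    detected = flagged & churned
    detection_rate = len(detected) / len(churned) if churned else 0.0
    false_positive_rate = len(flagged - churned) / len(flagged) if flagged else 0.0
    return detection_rate, false_positive_rate
```

Run this over a quarter of historical data before trusting the model prospectively; a false positive rate above 50% means your thresholds need loosening.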

Building a retention cohort analysis

Complement your churn prediction model with cohort analysis. Group customers by signup month, track retention at Month 1, 3, 6, and 12, and segment by acquisition channel, plan type, and onboarding completion. If M3 retention improves with each cohort, your product is getting better. If it is flat despite growing signups, you have a product-market fit problem that no amount of churn prediction can fix.
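The cohort table itself needs nothing more than a group-by. A minimal sketch, assuming each account is a `(signup_month, months_retained)` pair (an illustrative shape; segment further by channel or plan in practice):

```python
from collections import defaultdict

def cohort_retention(accounts, at_month=3):
    """Fraction of each signup cohort still active at `at_month`."""
    totals = defaultdict(int)
    retained = defaultdict(int)
    for signup_month, months_retained in accounts:
        totals[signup_month] += 1
        if months_retained >= at_month:
            retained[signup_month] += 1
    return {m: retained[m] / totals[m] for m in sorted(totals)}
```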

Getting started

  1. Pick your top 5 signals. Login frequency, core feature usage, team engagement, support sentiment, and billing status.
  2. Build a spreadsheet. Score your last 30 churned accounts retroactively.
  3. Set thresholds. Define risk ranges based on your retroactive analysis.
  4. Automate scoring. Build a weekly script or dashboard that calculates risk scores.
  5. Define interventions. Write the playbook for each risk tier.
  6. Measure outcomes. Track intervention success rate and adjust based on results.

Churn prediction is about creating a systematic habit of monitoring account health and intervening before the point of no return. A simple model that you actually use beats a sophisticated one sitting in a data warehouse.
