TL;DR
I ran Skene's PLG skills on Pitchkit, my pitch deck builder. The analyze command scanned the codebase and identified decision paralysis on the creation page as the #1 growth blocker. The plan command recommended a lightweight onboarding survey to route users to the right creation method. I built it. First-pitch completion went from 39% to 52%, a 33% relative improvement, over 8 weeks.
About Pitchkit
Pitchkit is a pitch deck builder I built for founders raising capital. It generates investor-ready decks, scores them against VC expectations, and offers four ways to create a pitch:
- AI-generated: answer a few questions, get a full deck
- Template-based: pick a proven structure and customize
- Import existing: upload a PDF or Google Slides link
- Blank canvas: start from scratch
Four options sounds like flexibility. In practice, it was causing a problem.
What Skene's PLG analysis found
I ran plg-skills analyze against the Pitchkit codebase. The command scanned the full repository (routes, components, user flows, analytics hooks) and produced a structured report of onboarding friction points.
The top finding: the creation page presented all four methods equally, with no guidance on which to pick.
For a first-time user landing on that page, the question wasn't "how do I build a pitch deck?" but "which of these four buttons do I click?" That hesitation is decision paralysis, and it was killing activation.
The analysis also flagged that Pitchkit had no onboarding personalization. Every user, regardless of stage, experience, or intent, saw the same creation page. A YC founder with a draft deck and a first-time founder with nothing written had identical experiences.
What plg-skills plan recommended
After the analysis, I ran plg-skills plan to get actionable recommendations. The plan recommended:
- A lightweight survey before the creation page to segment users by context (stage, existing materials, experience level)
- Deterministic routing: based on survey answers, recommend the best creation method instead of showing all four equally
- Keep it dismissible: power users who know what they want should be able to skip directly to their preferred method
The plan specifically recommended against heavy onboarding flows (multi-page wizards, mandatory tutorials, product tours) and instead focused on reducing time to value with minimal friction.
The problem in numbers
Before the change, Pitchkit's data showed:
- 61% of signups never completed their first pitch deck
- Median time to value was 14 minutes (signup to first completed pitch)
- Users who did complete a pitch mostly used AI generation, but many started with a template, got confused, and either switched or abandoned
The creation page had a bounce rate problem. Users were reaching it and then doing nothing.
The hypothesis
The issue wasn't motivation. These users had signed up, verified their email, and landed on the creation page. They wanted to build a pitch deck. They just didn't know which method was right for them.
If I ask a few questions first and route them to the right method, more users will complete their first pitch.
What I built
I built a 4-step onboarding survey based on the plan's recommendations:
Step 1: What's your goal? Fundraising, practice pitch, investor update, or internal deck.
Step 2: What do you have so far? Nothing yet, some notes, a draft deck, or a finished deck to improve.
Step 3: How familiar are you with pitch decks? First time, done it before, or experienced.
Step 4: What matters most right now? Speed, customization, or scoring feedback.
Based on the combination of answers, the survey recommends one of the four creation methods with a clear explanation of why. The logic is deterministic. No AI inference, no loading states. Answer the questions, get a recommendation instantly.
Key design decisions:
- Dismissible at any step: a "Skip" link is always visible
- No account data collected: the survey doesn't store answers in the database, it just routes the user
- Shows on first visit only: returning users go straight to the creation page
- Under 30 seconds: four taps and a recommendation
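The deterministic routing described above amounts to a plain lookup over the four answers. Here's a minimal sketch of what that mapping could look like; the function name and answer values are illustrative assumptions, not Pitchkit's actual code:

```python
# Hypothetical sketch of deterministic survey routing.
# Answer values and the function name are illustrative, not Pitchkit's code.

def recommend_method(goal: str, materials: str, familiarity: str, priority: str) -> str:
    """Map the four survey answers to one of the four creation methods."""
    # An existing draft or finished deck points to import, regardless of goal.
    if materials in ("draft_deck", "finished_deck"):
        return "import"
    # First-timers with nothing written get the fastest path.
    if materials == "nothing" and familiarity == "first_time":
        return "ai_generated"
    # Users who prioritize customization get templates, or a blank
    # canvas if they're experienced enough not to need a structure.
    if priority == "customization":
        return "blank_canvas" if familiarity == "experienced" else "template"
    # Default: speed or scoring feedback routes to AI generation.
    return "ai_generated"
```

Because it's a pure function of the answers, there's nothing to wait on: the recommendation renders the instant the fourth answer is tapped, which is what makes the "no loading states" promise possible.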
The results
I ran the survey for 8 weeks against a control group. Here's the data:
| Metric | Before | After | Change |
|---|---|---|---|
| First pitch completion rate | 39% | 52% | +33% |
| Median time to first pitch | 14 min | 8 min | -43% |
| AI generation usage | 44% | 51% | +16% |
| Template usage | 31% | 28% | -10% |
| Import usage | 15% | 14% | -7% |
| Blank canvas usage | 10% | 7% | -30% |
The overall completion rate went up, and the median time to first pitch nearly halved. Users were getting to their aha moment faster because they weren't spending time figuring out which method to use.
The shift toward AI generation makes sense. Most first-time users answered "nothing yet" for existing materials, and the survey correctly routed them to the fastest path.
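For an 8-week test like this, the treatment/control split needs to be sticky: a given user should see the same experience on every visit. A common way to do that is a deterministic hash of the user ID; this is a generic sketch of that technique, not the setup Pitchkit actually used:

```python
# Generic sketch of deterministic A/B bucketing (an assumption,
# not Pitchkit's actual experiment infrastructure).
import hashlib

def assign_variant(user_id: str, experiment: str = "onboarding_survey",
                   treatment_pct: int = 50) -> str:
    """Deterministically bucket a user into 'treatment' or 'control'."""
    # Hashing the experiment name together with the user id means
    # different experiments produce independent splits for the same user.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100  # stable value in 0..99
    return "treatment" if bucket < treatment_pct else "control"
```

No assignment table to store, and the same user always lands in the same arm, so the survey group and the control group stay clean for the full 8 weeks.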
Why it worked
Four things made this effective:
1. It reduced choices, not features. All four creation methods still exist. The survey just highlights the right one for each user. No functionality was removed.
2. It matched the user's context. A founder with a draft deck doesn't need AI generation, they need import. A first-timer with no materials doesn't need a blank canvas. The survey captures enough context to make a good recommendation.
3. It was fast and skippable. Under 30 seconds, dismissible at any step. Users who know what they want are never blocked.
4. It was deterministic. No loading spinners, no "AI is thinking" states. Tap an answer, see the next question, get a recommendation. The speed reinforces trust.
Unexpected insight
The survey data revealed an unexpected user segment: investors and advisors using Pitchkit to review or improve founders' decks. They weren't building their own pitches. They were importing existing decks to use the scoring feature.
This segment had always existed in the data, but without the survey question "What's your goal?", there was no way to identify them. They were getting routed to AI generation (the default recommendation for unclear intent) when they actually needed the import flow.
After the survey, import completion rates for this segment jumped significantly because they were finally being pointed to the right tool.
What Skene's PLG skills actually did
To be clear about what Skene's PLG skills did and didn't do here:
Skene did:
- Scan the Pitchkit codebase and identify decision paralysis as the top onboarding friction point
- Recommend a survey-based routing approach over heavier alternatives
- Flag the absence of onboarding personalization as a growth opportunity
I did:
- Write the survey questions (I know my users)
- Build the implementation
- Run the A/B test
Skene's value was in finding the right problem to solve and recommending the right shape of solution. I brought the domain knowledge and execution.
Try it on your own product
If your SaaS has a similar pattern (multiple paths, no routing, unclear first-run experience), Skene's PLG skills can find it.
Run plg-skills analyze to scan your codebase for onboarding friction, activation gaps, and growth opportunities. Then run plg-skills plan to get actionable recommendations.
The analysis takes minutes. The recommendations are specific to your codebase, not generic advice.