# Skene — full LLM corpus

This file concatenates the canonical reference content from skene.ai into one plain-text document so large language models can retrieve, quote, and cite accurately. Pages are separated by ` --- ` boundaries, with each section opening with the source URL.

Canonical site: https://www.skene.ai
Index companion: /llms.txt
Editorial standards: https://www.skene.ai/editorial
Contact: support@skene.ai
Last generated: build time (regenerated on every deploy)

---

# About Skene
Source: https://www.skene.ai/about

Skene is building code-first product-led growth automation. The product reads a codebase and a linked Supabase project, generates a prioritized growth plan, and automates lifecycle emails and in-app notifications against real-time product events.

Founding team:
- Teemu Kinos, Co-founder. SaaS background. Building Skene because product-led growth should not require hiring an army.
- Michele Boggia, Co-founder. PhD in physics, NLP expert. Owns how Skene understands and analyzes code at scale.
- Teppo Hudsson, Co-founder. Multi-product builder, deeply technical and product-focused.

Thesis: traditional PLG was built for an era when hiring your way to growth was the only option. Teams building with Cursor, Claude Code, and similar AI-native tools do not have time for playbooks designed for 50-person companies. Skene gives AI-native builders the PLG infrastructure without a single human in the loop.

---

# Contact Skene
Source: https://www.skene.ai/contact

Skene Technologies
Mikonkatu 9
00170 Helsinki
Finland

Product support: support@skene.ai
Open source & technical: https://github.com/SkeneTechnologies/skene/issues and https://github.com/SkeneTechnologies/skene/discussions
Community: https://www.reddit.com/r/plgbuilders
LinkedIn: https://www.linkedin.com/company/skeneai
GitHub org: https://github.com/SkeneTechnologies

Partnership, press, and responsible disclosure all route through support@skene.ai — tag the subject line so triage is fast. Reply SLA: one business day.

---

# Editorial standards
Source: https://www.skene.ai/editorial

Who writes for Skene: everything published on skene.ai is produced by named humans on the Skene team or by credited external contributors. Historical posts attributed to "Skene" as an author are being migrated to named authors.

Fact-checking: product claims reference the code, public docs, or the changelog. Third-party claims (about Pendo, Amplitude, Gainsight, Segment, Mixpanel, etc.) are anchored to each vendor's own public documentation or pricing page. Benchmarks cite methodology and sample size.

AI assistance: Skene uses large language models to draft outlines, tighten copy, and generate JSON-LD or code snippets. A named human author reviews, edits, and signs off before publication. AI-generated code examples are executed against the real CLI or runtime before shipping. Fake quotes, fake case studies, and fake customer names are not produced.

Corrections: errors reported to support@skene.ai are fixed inline and disclosed with a dated note. We do not stealth-edit comparison pages to downplay competitor strengths.

Sponsorship: Skene does not accept paid placements, sponsored posts, or affiliate commissions. Third-party tool recommendations disclose any commercial relationship; the default assumption is none.

AI crawler posture: robots.txt grants major AI crawlers (GPTBot, ClaudeBot, PerplexityBot, Google-Extended, and others) access to public content and blocks authenticated workspace paths.
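A minimal sketch of what that posture can look like in a robots.txt file; the `/app/` path is illustrative, not Skene's actual configuration:

```
# Illustrative sketch only; not Skene's actual robots.txt.
# Named AI crawlers may read public content but not workspace paths.
User-agent: GPTBot
User-agent: ClaudeBot
User-agent: PerplexityBot
User-agent: Google-Extended
Disallow: /app/

# All other crawlers get the same posture.
User-agent: *
Disallow: /app/
```

Per the robots exclusion protocol, anything not disallowed in a matching group is crawlable by default, so public content stays open while the hypothetical `/app/` workspace stays closed.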
Quoting Skene content in AI answers is welcome when linked to the canonical URL. --- # Product-led growth (PLG) reference hub Source: https://www.skene.ai/product-led-growth A system-level reference for product-led growth (PLG): how it works, when it makes sense, and how the other PLG guides on this site fit together. Who this is for: Founders, PMs, and growth leads who want a shared, system-level understanding of PLG. When to use this: - You need a clear definition of PLG that goes beyond “free trial + self-serve signup”. - You want to align leadership and teams on what “doing PLG” actually means. - You are deciding whether PLG is the right motion for your product or market. Key questions: - What is product-led growth and how does it differ from sales-led or marketing-led motions? - How does a PLG system work end-to-end, from acquisition to expansion? - When does PLG make sense for a product, and when is it a poor fit? - What components do we need in place to run PLG as an operating system, not a campaign? ## What is product-led growth? Product-led growth (PLG) is a go-to-market strategy where the product experience itself is the primary driver of acquisition, activation, expansion, and retention. Instead of relying mainly on demos, long sales cycles, or campaigns, PLG teams design onboarding, in-product guidance, and workflows so that users can discover value directly inside the product and qualify themselves through their behavior. ## The PLG system in one picture You can think of PLG as a looped system rather than a one-way funnel. New users discover the product, sign up, move through onboarding, hit an activation milestone, form habits through retention loops, and then expand usage or revenue over time. At each of these stages, the product, data, and commercial model work together: events and instrumentation show where users succeed or get stuck, and teams use those signals to refine onboarding, product surfaces, and pricing. Changes flow back into the system, which in turn shapes the next cohort of users. ## Core components of a PLG system Product surface: the actual experience users touch, including signup flows, onboarding journeys, in-product prompts, empty states, templates, and everyday workflows. Data and instrumentation: event tracking, funnels, cohorts, and retention views that show how real users move through the product and where they drop off or return. Growth mechanics: the loops that compound usage, such as collaboration features, invites, sharing, referrals, and reminders connected to real work. Commercial model: pricing and packaging that align with usage and value, such as free tiers, trials, usage-based or seat-based plans, and clear upgrade paths that fit how customers adopt the product. ## When PLG makes sense and when it does not PLG tends to work best when individual users or small teams can start using the product themselves, reach a meaningful outcome quickly, and expand usage without needing heavy implementation projects. It is a weaker fit when the product is purchased rarely, has very high stakes per decision, or depends on long procurement and security processes before anyone can use it. In those cases, PLG can still play a role (for example with limited sandboxes or tools for champions), but it is rarely the whole motion. 
## How to use this PLG reference hub Start here to align on basic concepts and system behavior, then dive deeper into the dedicated pages: the definition of PLG, how AI-native products change PLG, how onboarding and activation relate, how retention loops work, common failure modes, and which metrics matter. You can treat this hub as the “map of the territory” and use the other pages as reference chapters when you are working on specific problems, such as redesigning onboarding or debugging weak retention. --- # What is product-led growth? Source: https://www.skene.ai/product-led-growth/what-is-plg A canonical definition of product-led growth (PLG), how it compares to sales- and marketing-led models, and what it means for product, data, and go-to-market teams. Who this is for: Leaders and ICs who need a precise definition of PLG and a shared mental model for operating it. When to use this: - You are deciding whether to adopt PLG or add a PLG motion alongside an existing sales-led motion. - You need a concise explanation of PLG for leadership, the board, or new teammates. - You want to understand how PLG affects roles across product, growth, marketing, and sales. Key questions: - What exactly is product-led growth, beyond the buzzword? - How does PLG differ from sales-led and marketing-led growth along key dimensions? - How does the PLG value chain work from acquisition through expansion? - What changes in team structure and responsibilities when you move toward PLG? - What misconceptions about PLG should you avoid? ## Definition and core idea Product-led growth (PLG) is a strategy where the primary way customers discover, evaluate, and expand their use of a product is through the product itself. Users can sign up, reach value, and grow usage with minimal friction and without needing a long human-driven sales process. In a mature PLG motion, product usage is both the main engine of growth and the main source of truth. Activation, retention, and expansion are modeled as measurable behaviors, and teams iterate on those behaviors through product changes and experiments. ## How PLG differs from sales-led and marketing-led growth In sales-led models, humans do most of the work before users touch the product: discovery calls, demos, proof-of-concept projects, and negotiation. Marketing-led models rely heavily on content and campaigns to create demand and qualify leads before a product experience is involved. In PLG, users experience value early and often. They self-educate inside the product, qualify themselves through usage, and often bring the product into their team before a salesperson ever joins the conversation. Sales and marketing still matter, but they build on top of strong product usage instead of compensating for a weak product experience. ## The PLG value chain A useful way to think about PLG is as a value chain: acquisition → onboarding → activation → retention → expansion. Each stage has its own question: who is arriving, how quickly do they reach value, how often do they return, and how does usage grow into revenue? Strong PLG motions treat each stage as an area of continuous design and experimentation. Rather than treating onboarding, retention, and pricing as one-off projects, teams revisit each link in the chain regularly using shared metrics and qualitative feedback. ## PLG as a system for founders and PMs PLG is more than a set of tactics like adding a free trial or in-product checklist. It is a system where product, data, and go-to-market are tightly coupled. 
Product changes affect activation and retention; those in turn drive which accounts become qualified for sales outreach or expansion offers. Operating that system well requires reliable instrumentation, a culture of experiment design and review, and shared ownership across product, growth, marketing, and sales. Decisions are grounded in behavioral data, not only in intuition or anecdote. ## Org and role implications For founders and leadership, PLG usually means investing earlier in product, data, and self-serve infrastructure rather than primarily in headcount for outbound or field sales. For PMs and growth leads, PLG expands the scope of ownership upstream and downstream: signup flows, onboarding journeys, activation definitions, and sometimes the handoff to sales and customer success. ## Common misconceptions about PLG “PLG means no sales team” is inaccurate. In practice, many successful PLG companies run a hybrid motion where product usage creates high-intent leads and sales focuses on expansion and complex deals. “PLG is just a free trial or freemium plan” is also wrong. Free trials and freemium plans are distribution choices. Without a clear activation definition, strong onboarding, and a retention model, they rarely produce durable growth by themselves. --- # AI vs traditional product-led growth Source: https://www.skene.ai/product-led-growth/ai-vs-traditional-plg How AI-native products change activation, retention, and experimentation patterns in product-led growth. Who this is for: Teams building AI-native products or adding AI features to existing PLG products. When to use this: - You are applying a traditional PLG playbook to an AI product and the results feel off. - You need to understand how AI-specific risks and signals affect PLG metrics. - You are designing onboarding or activation for an AI assistant, agent, or model-powered workflow. Key questions: - How does AI change activation and retention patterns in PLG? - What new mechanics and risks appear when the core product is model-driven? - How should we adapt onboarding, experimentation, and metrics for AI products? ## Why AI changes PLG dynamics AI products are probabilistic: the same input can produce different outputs depending on context, data, and model state. That makes perceived value more variable and user trust more fragile than in purely deterministic software. At the same time, AI systems can adapt interfaces, recommendations, and workflows based on observed behavior, which creates new opportunities for personalized activation and retention loops that did not exist in static products. ## Traditional PLG vs AI-native PLG In traditional PLG, onboarding often focuses on helping users discover features and configure settings. In AI PLG, onboarding must also help users build good mental models of what the system can and cannot do, and how to interact with it effectively. Experimentation and risk look different too: small changes to prompts, guardrails, or model configuration can have outsized effects on perceived quality, safety, and trust. You need to watch not just conversion metrics but also signals of user confidence and error handling. ## New activation and retention patterns in AI products Activation in AI products is often tied to the first time a user gets a “good enough” output that they actually use in their real work: a draft they edit and send, a workflow that runs end-to-end, or a recommendation they follow. 
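As an illustration (not Skene's API), one way to instrument that signal is to record whether a generated output was actually kept, and define activation as the first accepted output rather than the first generated one:

```typescript
// Hypothetical instrumentation sketch: treat the first AI output a user
// actually keeps (edits, sends, or runs) as the activation signal.
// Event shape and names are illustrative assumptions.

type AiOutputEvent = {
  userId: string;
  outputId: string;
  action: "generated" | "accepted" | "discarded";
  timestamp: Date;
};

// A user is "activated" once they have accepted at least one output,
// because acceptance implies the draft entered their real work.
function isActivated(events: AiOutputEvent[], userId: string): boolean {
  return events.some((e) => e.userId === userId && e.action === "accepted");
}
```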
Retention depends on whether the AI system becomes embedded in repeatable workflows such as projects, documents, agents, or automations that users return to, not just on whether it can produce impressive one-off demos. ## AI-specific PLG mechanics Inline suggestions, completion, and automation can dramatically reduce time-to-value when they are well-tuned, but they can also overwhelm or confuse users if surfaced too early or in the wrong context. Feedback channels such as ratings, corrections, or “compare to previous” views become part of the PLG system: they inform model and UX improvements and give users a sense of control over a probabilistic system. ## Risks and failure modes in AI PLG Overpromising what the AI can do or failing to clearly communicate its limits can erode trust quickly and show up as sudden drops in activation or retention after initial curiosity fades. Onboarding flows that assume users will craft perfect prompts or understand advanced configuration from day one often backfire; users churn before they ever see the product at its best. ## Implications for metrics and experimentation AI products still need traditional PLG metrics such as activation, retention, and expansion, but must also add quality and trust signals: acceptance rates, correction rates, error reports, and qualitative feedback. Experiments should be designed to monitor both business outcomes and these quality signals. A change that boosts short-term engagement but quietly increases frustration or errors can damage the motion over time. --- # Onboarding vs activation Source: https://www.skene.ai/product-led-growth/onboarding-vs-activation How to distinguish onboarding from activation, define the right activation events, and design flows that reliably lead new users to value. Who this is for: PMs and growth leads responsible for new-user journeys and early product experience. When to use this: - You are redesigning onboarding and want to avoid treating it as a tour of UI surfaces. - You need a crisp activation definition that correlates with long-term value. - You are debugging why strong signup volume is not turning into active accounts. Key questions: - What is the difference between onboarding and activation in PLG? - How do we define a good activation event for our product? - How should we design and measure onboarding flows that drive activation? ## Definitions: onboarding vs activation Onboarding is the guided path a new user takes from signup to their first meaningful experience. It includes what they see, what they are asked to do, and how the product responds along the way. Activation is the milestone where a user first reaches a state that strongly predicts long-term success, such as completing a core workflow, connecting key data, or inviting collaborators. Onboarding is the path; activation is the destination. ## The Aha moment and time-to-value The “Aha moment” is when the product concept clicks for a user; activation is when they actually complete the behavior that creates durable value. In some products these are the same moment, but not always. Time-to-value measures how long it takes users to reach activation from signup. Even modest reductions here can have outsized effects on conversion and retention, especially in self-serve motions. ## Designing onboarding flows Strong onboarding flows start from a clear definition of activation and work backwards. They prioritize the minimum set of steps needed to reach that milestone for a given role or use case. 
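A minimal sketch of working backwards, assuming a hypothetical developer path expressed as data: the activation event is declared first, and each step exists only because that milestone requires it.

```typescript
// Sketch: define the activation milestone first, then list only the steps
// a given role must complete to reach it. All names are illustrative.

type OnboardingStep = {
  id: string;
  description: string;
  prerequisiteIds: string[];
};

type OnboardingPath = {
  role: "developer" | "analyst";
  activationEvent: string; // the milestone the path must end at
  steps: OnboardingStep[]; // minimum steps, ordered toward activation
};

const developerPath: OnboardingPath = {
  role: "developer",
  activationEvent: "first_successful_api_call",
  steps: [
    { id: "create-key", description: "Generate an API key", prerequisiteIds: [] },
    { id: "install-sdk", description: "Install the SDK", prerequisiteIds: [] },
    { id: "first-call", description: "Make one API call", prerequisiteIds: ["create-key", "install-sdk"] },
  ],
};
```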
Patterns like progressive disclosure, checklists, guided templates, and sample data can help, but only when they are tied directly to real jobs-to-be-done rather than to feature tours. ## Defining activation events and success criteria A good activation event is observable in your data, occurs relatively early in the journey, and has a strong relationship with long-term retention or revenue at the user or account level. You can often find candidate events by looking at what your best customers did in their first days or weeks and identifying the common milestones they reached before becoming long-term users. ## Measuring onboarding and activation At a minimum, track how many users start onboarding, how many reach each major step, and how many reach activation, along with the time it takes. Segment these numbers by persona, plan, or acquisition channel. Use this visibility to focus improvements: sometimes the biggest gains come from simplifying a single form or removing one unnecessary configuration step between signup and activation. ## Common pitfalls Overly long or generic onboarding tours that walk through every feature can slow users down and distract from the outcome that actually matters. In AI products, expecting users to provide perfect prompts or complex configuration up front is a frequent mistake. Safer sandboxes, opinionated defaults, and guided examples usually work better. --- # Retention loops Source: https://www.skene.ai/product-led-growth/retention-loops Retention loops are self-reinforcing cycles that bring users back to your product repeatedly. Learn how to design, measure, and optimize retention loops that drive long-term engagement and expansion in product-led growth. Who this is for: PMs, growth leads, and customer teams focused on engagement, retention, and expansion. When to use this: - You see reasonable activation, but users are not coming back or growing usage. - You are designing features intended to build habits, collaboration, or viral spread. - You want to understand retention cohort charts in terms of real product behavior. Key questions: - What is a retention loop and how is it different from a funnel? - Which types of loops are most relevant in PLG products? - How do we design, measure, and debug retention loops in practice? ## From funnels to loops Funnels are useful for understanding how users move through a finite set of steps, such as signup or a one-time purchase. Retention, however, depends on what happens after those steps: whether users return, how often, and what they do. Thinking in loops means focusing on repeatable cycles: trigger → action → reward → investment. Each loop describes how a user comes back, gets value, and does something that makes future returns more likely or valuable. ## Types of retention loops Habit loops: recurring triggers (notifications, work tasks, schedules) that bring users back, paired with clear rewards and small investments that make the product more valuable over time. Data loops: more usage produces more data or configuration, which improves results for the same user or account. AI products often rely on these loops as models adapt to specific data and behavior. Collaboration loops: inviting teammates or stakeholders increases the value of the product for everyone involved, making it harder for the account to churn. ## Designing retention loops Start by identifying the core repeat action that defines healthy usage in your product. For example, shipping changes, reviewing analytics, or triggering automations. 
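The loop you are designing toward can be written down explicitly, so that each part has an owner and an event to instrument. A hypothetical sketch:

```typescript
// Sketch of the trigger → action → reward → investment cycle described
// above, captured as a record a team could review and instrument.
// The example loop is hypothetical.

type RetentionLoop = {
  trigger: string;     // what brings the user back
  action: string;      // the core repeat action
  reward: string;      // the value the user gets
  investment: string;  // what makes the next visit easier or more valuable
};

const deployLoop: RetentionLoop = {
  trigger: "Weekly digest: 'Your last deploy improved p95 latency'",
  action: "Ship a change and review its impact",
  reward: "Clear evidence the change helped",
  investment: "Saved dashboards and alerts tuned to the user's services",
};
```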
Then design triggers, rewards, and investments around that action: what brings users back at the right moment, what value they get, and what they do that makes the next visit easier or more rewarding. ## Measuring retention Use cohort-based retention curves to see how groups of users who started in the same time window behave over weeks or months. Look for whether newer cohorts are trending up or down compared to earlier ones. Complement aggregate curves with feature-level and segment-level views so you can link retention patterns to specific loops, roles, or use cases rather than treating retention as a single number. ## Diagnosing retention problems Common patterns include strong early usage followed by steep drop-offs, “tourist” users who activate but never embed the product in their workflows, and heavy unpaid usage that never turns into revenue. When you see these patterns, trace them back to loop design: are there strong triggers and rewards, and do users have clear reasons to invest in the product so that it becomes harder to abandon? --- # Common PLG failures Source: https://www.skene.ai/product-led-growth/common-failures Typical ways product-led growth efforts go wrong, and a structured way to debug a struggling PLG motion. Who this is for: Teams who have "tried PLG" and are not seeing the expected results. When to use this: - You have launched PLG initiatives, but activation, retention, or revenue are underwhelming. - You suspect a mismatch between your product and a pure PLG motion. - You need a checklist to systematically debug where your PLG motion is breaking. Key questions: - Why do PLG strategies fail in practice? - Which failures are strategic fit issues vs execution issues? - How can we debug a struggling PLG motion without guessing? ## Why PLG efforts fail The most common root cause is a mismatch between the product and the chosen motion: trying to force a fully self-serve PLG play on products that truly require high-touch implementation or complex procurement. The second is under-investing in the infrastructure PLG needs, such as instrumentation, experimentation, and self-serve onboarding, while still expecting PLG-like outcomes. ## Strategy failures Declaring a PLG strategy without clarifying which segments or use cases it applies to leads to confusion and misaligned expectations across teams. Running half-sales-led, half-PLG motions without clear rules of engagement for who owns which accounts, what counts as a qualified signal, and when sales should engage creates internal friction and a poor experience for customers. ## Execution failures Over-focusing on top-of-funnel signups at the expense of activation and retention leads to dashboards full of new accounts that never adopt the product meaningfully. Treating onboarding or pricing changes as one-off projects instead of as experiments makes it hard to learn systematically and often results in regressions that no one notices until much later. ## AI-specific failures Overselling AI capabilities in marketing or onboarding, then delivering inconsistent or low-quality outputs, can quickly destroy user trust and generate negative word-of-mouth. Designing flows that expect users to behave like expert prompt engineers from day one often results in early frustration and abandonment before users ever see reliable value. 
## Metrics and decision-making failures Relying on vanity metrics such as total signups or page views, without connecting them to activation, retention, or revenue, makes it easy to declare success while the real motion is weak. Ignoring cohorts, segments, and qualitative insight when reading experiments can cause teams to scale up changes that work for a narrow group while hurting the broader base. ## A checklist for debugging PLG Start by checking strategic fit: can your ideal customers reasonably self-serve the first steps, or do they actually need heavy implementation or approvals? Then move stage by stage: is activation clearly defined and instrumented, are retention loops present and measured, and does your pricing and packaging align with how customers adopt and grow? --- # PLG metrics Source: https://www.skene.ai/product-led-growth/plg-metrics A reference for measuring product-led growth: key metrics across the PLG funnel and loops, including AI-specific quality and trust signals. Who this is for: Founders, PMs, and growth leads designing dashboards and decision frameworks for PLG. When to use this: - You are setting up or revising your PLG metrics and dashboards. - You want to connect product behavior to acquisition, retention, and revenue outcomes. - You are adding AI features and need to understand how quality and trust metrics fit into PLG. Key questions: - How is PLG measurement different from traditional sales pipeline reporting? - Which metrics matter most at each stage of the PLG value chain? - What AI-specific metrics should we track alongside traditional PLG metrics? - How can we use these metrics to prioritize experiments and investments? ## Why PLG needs its own metric model Traditional sales metrics focus on opportunity stages and deal values. PLG metrics focus on user and account behavior over time: what people do in the product, how often, and with what outcomes. Because the product experience drives growth in PLG, you need metrics that describe that experience directly instead of only tracking what happens in CRM stages. ## The PLG funnel and loops in metrics form At the funnel level, you can map acquisition, onboarding, activation, retention, and expansion to concrete metrics such as signup-to-activation rate, time-to-activation, and expansion revenue. At the loop level, you track recurring behaviors such as weekly active usage, feature adoption, collaboration events, and upgrade triggers to see whether value compounding is actually happening. ## Core PLG metrics by stage Acquisition: total signups, visitor-to-signup conversion, and volume of product-qualified leads (PQLs) by segment or channel. Activation: activation rate at the user and account level, time-to-activation, and activation broken down by persona, plan, or acquisition source. Retention: N-day retention, cohort-based retention, DAU/WAU/MAU ratios, and stickiness (for example DAU/MAU). Monetization and expansion: free-to-paid conversion, ARPU or ARPA, expansion revenue, and net revenue retention (NRR). ## AI-specific and quality metrics For AI products, you also need metrics that describe model quality and user trust: acceptance rates for AI outputs, correction or override rates, and the share of outputs that lead to successful downstream actions. Qualitative signals such as user feedback, satisfaction ratings, and support tickets about AI behavior should be tracked alongside quantitative metrics, not treated as anecdotes. 
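Most of these metrics reduce to simple event math. A sketch of a few of them against a generic event log; the field names, helpers, and "signup" event are assumptions for illustration, not Skene's schema:

```typescript
// Minimal sketch of a few stage metrics computed from raw events.

type Event = { userId: string; name: string; timestamp: Date };

// Activation rate: share of signups that reached the activation event.
function activationRate(events: Event[], activationEvent: string): number {
  const signups = new Set(events.filter((e) => e.name === "signup").map((e) => e.userId));
  const activated = new Set(
    events.filter((e) => e.name === activationEvent && signups.has(e.userId)).map((e) => e.userId),
  );
  return signups.size === 0 ? 0 : activated.size / signups.size;
}

// Stickiness (DAU/MAU): average daily actives divided by monthly actives.
function stickiness(dailyActiveCounts: number[], monthlyActive: number): number {
  const avgDau = dailyActiveCounts.reduce((a, b) => a + b, 0) / dailyActiveCounts.length;
  return monthlyActive === 0 ? 0 : avgDau / monthlyActive;
}

// AI quality signal: acceptance rate of AI outputs.
function acceptanceRate(accepted: number, generated: number): number {
  return generated === 0 ? 0 : accepted / generated;
}
```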
## Minimum viable metric stack by stage In early stages, focus on a minimal set: activation rate, time-to-activation, a small number of retention points (such as D7 and D30), and a qualitative understanding of why users stay or leave. As you scale, add more segmentation, cohort analysis, and unit economics so you can see which segments and use cases are driving sustainable growth and which are burning resources. ## Instrumentation principles Define a small, stable set of core events that map to your journeys and milestones, and keep naming conventions consistent so that analyses remain interpretable over time. Treat data quality as part of the product: missing or inconsistent events are PLG bugs because they prevent you from seeing and improving how the system behaves. ## Using metrics to guide decisions Use metrics to identify the current bottleneck (acquisition, activation, retention, or expansion) before choosing tactics. Optimizing a stage that is not the constraint rarely moves the business. Design experiments with explicit primary and secondary metrics, plus clear time windows and cohort definitions, so you can distinguish real effects from noise and avoid overfitting to short-term changes. --- # Product-led growth (PLG) Source: https://www.skene.ai/resources/glossary/product-led-growth Category: Foundations Also known as: PLG, product led growth, product-led, PLG motion A go-to-market strategy where the product experience itself drives acquisition, activation, expansion, and retention. Product-led growth (PLG) is a go-to-market motion where the product experience itself drives acquisition, activation, expansion, and retention. Instead of relying primarily on outbound sales or marketing, teams design journeys, milestones, and value moments that users can discover and complete inside the product with minimal friction. Strong PLG motions still use sales and success, but those humans focus on accounts that already show clear product usage signals rather than creating intent from scratch. ## Definition Product-led growth (PLG) is a strategy where the product experience is the primary driver of growth. Users can discover, try, and expand usage with minimal friction and without needing heavy sales involvement. ## Why it matters PLG can reduce acquisition costs, shorten sales cycles, and create more aligned product and go-to-market teams. It works best when users can reach value quickly on their own. ## Examples of PLG in practice Developer tools, analytics products, and collaboration software often use PLG: individuals or small teams can start for free, reach value quickly, and then expand usage across their organization. Common patterns include free tiers, generous trials, and in-product prompts that encourage inviting teammates or connecting data sources. ## How Skene supports product-led growth Skene turns your codebase into PLG infrastructure by generating onboarding journeys, milestones, and analytics that evolve as you ship. Instead of treating PLG as a separate project, Skene keeps your product experience, journeys, and metrics aligned as the product changes. ## Implementation notes - Start by defining one or two activation milestones and instrumenting them end-to-end instead of trying to "be PLG" everywhere at once. - Use product signals (completed onboardings, repeat use of value-driving workflows, team invites) to decide when humans should reach out. 
- PLG is not just about removing sales—it is about making the product the best salesperson and letting humans focus on high-value interactions. ## FAQ Q: Is product-led growth only for B2C products? A: No. PLG works well for B2B products too, especially those with individual end users (developers, designers, analysts) who can try the product before involving procurement. Many successful B2B companies like Slack, Figma, and Datadog use PLG. Q: Can PLG work with a sales team? A: Yes. Most mature PLG companies have sales teams, but sales focuses on accounts that show strong product usage signals (PQLs) rather than cold outreach. This is sometimes called "product-led sales" or a hybrid GTM motion. Q: What is the difference between PLG and freemium? A: Freemium is a pricing model (a free tier with paid upgrades). PLG is a go-to-market strategy where the product drives growth. You can have PLG without freemium (e.g., a free trial model) and freemium without PLG (if the free tier does not lead to organic growth). --- # Activation Source: https://www.skene.ai/resources/glossary/activation Category: Metrics Also known as: User activation, Product activation, Activation event, Activation milestone The moment when a new user reaches a key milestone that strongly correlates with long-term retention or value. Activation is the pivotal moment in a user journey where they complete a key action or sequence of actions that strongly correlates with long-term success. Unlike vanity metrics such as signups or logins, activation measures whether users have actually experienced enough value to stick around. In product-led growth, activation is typically the first major conversion event after signup and serves as the bridge between acquisition and retention. Getting activation right—defining it clearly, measuring it consistently, and optimizing the path to reach it—is one of the highest-leverage activities for any PLG team. ## Definition Activation is the point in a user journey where they complete a key action or sequence of actions that strongly correlates with long-term success. It represents the moment when a user transitions from "trying" to "using" your product in a meaningful way. ## Examples of activation Examples include: sending a first message, connecting a data source, creating a first project, or deploying a first integration. For Slack, activation might be when a team sends 2,000 messages. For Dropbox, it could be saving a file to a synced folder. For a developer tool, it might be completing a first successful API call. ## How to define activation for your product A good activation definition should be observable in your data, happen early in the journey, and strongly predict long-term retention or revenue. You can start by looking at what your best customers did in their first days or weeks and identify the common milestones they reached. Run a correlation analysis between early behaviors and 30/60/90-day retention to find which actions most strongly predict success. ## Activation vs aha moment The aha moment is when a user emotionally "gets" why your product matters to them. Activation is the measurable proxy for that moment. They are related but distinct: you cannot directly measure an emotional realization, so you define an activation event that correlates with it. ## Activation and Skene Skene uses milestones inferred from your codebase to suggest potential activation points, then tracks completion and time-to-value for those milestones. 
This makes it easier to refine your activation definition over time and see how changes to onboarding affect activation rates. ## Implementation notes - Start with a single, clear activation definition rather than tracking multiple competing versions. - Your activation event should be achievable within the first session or first few days—if it takes weeks, it is probably too complex. - Pair activation tracking with time-to-activation to understand not just whether users activate, but how quickly. ## FAQ Q: How do I find the right activation metric for my product? A: Analyze your retained users and identify the common actions they took early in their journey. Run correlation analysis between early behaviors (first 7–14 days) and long-term retention (60–90 days). The action with the strongest correlation is likely your activation event. Q: Can I have multiple activation events? A: You can track multiple candidate events during discovery, but operationally you should align your team around one primary activation definition. Multiple definitions create confusion and make it harder to optimize. Q: What is a good activation rate benchmark? A: Activation rates vary widely by product type. B2B SaaS products often see 20–40% activation rates, while consumer products may range from 10–30%. The key is to measure your baseline and improve it over time rather than chasing a universal benchmark. --- # Time-to-value (TTV) Source: https://www.skene.ai/resources/glossary/time-to-value-ttv Category: Metrics Also known as: TTV, Time to value, Time to first value, TTFV The time it takes for a new user or account to experience their first meaningful outcome or "aha moment". Time-to-value (TTV) measures how long it takes for a new user or account to experience a meaningful outcome from your product. In product-led growth, TTV is a critical leading indicator because users who reach value quickly are far more likely to convert, retain, and expand. A long TTV often signals onboarding friction, unclear product positioning, or a mismatch between user expectations and the actual product experience. Reducing TTV is one of the most effective ways to improve activation rates and trial-to-paid conversion. ## Definition Time-to-value (TTV) measures how long it takes for a new user or account to experience a meaningful outcome from your product. It is typically measured as the elapsed time between signup (or first login) and a defined activation event. ## Why it matters Shorter time-to-value is associated with better conversion, higher activation rates, and lower churn, especially in self-serve PLG funnels. Users have limited patience—if they do not see value quickly, they abandon the product before ever reaching activation. TTV also affects word-of-mouth: users who reach value fast are more likely to recommend your product to others. ## Types of time-to-value Time-to-first-value: The time until a user experiences any value, even a small win. Time-to-activation: The time until a user reaches your defined activation milestone. Time-to-habit: The time until a user establishes a regular usage pattern (often measured at Day 7 or Day 30). ## How to measure time-to-value Choose a clear activation event and measure the elapsed time from signup or first key action until that event occurs. You can then compare time-to-value across acquisition channels, roles, or plan types to see where users move faster or slower. 
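A sketch of that measurement, assuming a simple journey record with illustrative field names:

```typescript
// Sketch: time-to-value as elapsed time from signup to the activation
// event, summarized as a median per acquisition channel.

type UserJourney = {
  userId: string;
  channel: string;
  signupAt: Date;
  activatedAt?: Date; // undefined if the user never activated
};

function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Median TTV in hours, per channel, counting only users who activated.
function medianTtvByChannel(journeys: UserJourney[]): Map<string, number> {
  const byChannel = new Map<string, number[]>();
  for (const j of journeys) {
    if (!j.activatedAt) continue;
    const hours = (j.activatedAt.getTime() - j.signupAt.getTime()) / 36e5;
    byChannel.set(j.channel, [...(byChannel.get(j.channel) ?? []), hours]);
  }
  return new Map([...byChannel].map(([c, v]) => [c, median(v)]));
}
```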
Track both median TTV and the distribution—a few outliers with very long TTV can indicate specific personas or use cases that need targeted help. ## How to reduce time-to-value Streamline signup by reducing required fields and deferring non-essential data collection. Provide opinionated defaults and pre-built templates so users can see value before heavy configuration. Use progressive onboarding to guide users to their first win without overwhelming them with features. ## Time-to-value and Skene Skene automatically measures time-to-value for the milestones and journeys it generates, so you can see how fast different segments reach activation. Because these metrics are tied to your actual product journeys, you can directly see how changes in onboarding content impact time-to-value. ## Implementation notes - Measure TTV from the user's perspective, not yours—start the clock when they take their first intentional action, not when they verify their email. - Segment TTV by use case and persona to identify which groups need the most onboarding help. - Set a target TTV (e.g., under 10 minutes for a first win) and design your onboarding around achieving it. ## FAQ Q: What is a good time-to-value benchmark? A: It depends on product complexity. Simple tools should aim for value within the first session (minutes). Complex B2B products might target value within the first day or week. The key is to measure your baseline and systematically reduce it. Q: How is time-to-value different from activation rate? A: Activation rate measures the percentage of users who reach activation. Time-to-value measures how long it takes those who do activate. You can have a high activation rate but slow TTV, or fast TTV but low activation rate—both need optimization. Q: Should I measure time-to-value in hours or days? A: Use the unit that makes sense for your product. Consumer and simple SaaS products often measure in minutes or hours. Complex B2B products with implementation requirements may measure in days or weeks. --- # Product-qualified lead (PQL) Source: https://www.skene.ai/resources/glossary/product-qualified-lead-pql Category: Funnels Also known as: PQL, Product qualified lead, Usage-qualified lead A lead that has reached a usage threshold or milestone in the product that indicates high buying intent. A product-qualified lead (PQL) is a lead whose in-product behavior indicates high buying intent, such as reaching a usage threshold or completing a key workflow. Unlike marketing-qualified leads (MQLs) based on content engagement or sales-qualified leads (SQLs) based on sales conversations, PQLs are identified by what users actually do in your product. In a PLG motion, PQLs are the primary handoff point from product-led acquisition to sales-assisted conversion, making them a critical bridge between self-serve growth and human-assisted expansion. Defining, measuring, and operationalizing PQLs is one of the most impactful things a PLG company can do to align product, marketing, and sales teams around a shared understanding of what a qualified buyer looks like. Companies that implement PQL models typically see higher conversion rates, shorter sales cycles, and better alignment between the product experience and revenue goals. ## Definition A product-qualified lead (PQL) is a lead whose in-product behavior indicates high buying intent, such as reaching a usage threshold or completing a key workflow. 
PQLs represent users or accounts that have demonstrated value realization through their actions, not just interest through form fills or content downloads. ## PQL vs MQL vs SQL MQLs (marketing-qualified leads) are based on marketing engagement: downloading whitepapers, attending webinars, or visiting pricing pages. They indicate interest but not necessarily product understanding or buying readiness. SQLs (sales-qualified leads) are based on sales judgment after discovery calls or demos. They are further down the funnel but rely on subjective human assessment rather than usage data. PQLs are based on actual product usage data, making them the most reliable indicator of intent in a PLG model. A user who has integrated your API into their production environment is a stronger signal than one who downloaded a whitepaper. The key difference is signal quality. MQLs tell you someone is researching a problem. SQLs tell you someone is open to a conversation. PQLs tell you someone has already experienced value and is likely ready to pay for more. Many PLG companies use a combined approach: MQLs feed the top of the funnel, PQLs identify the best conversion opportunities, and SQLs confirm readiness for enterprise deals. ## How to define PQL criteria Start by analyzing your existing converted customers. Look at the actions they took in the product before they converted to a paid plan. Common patterns include reaching usage limits, inviting team members, or using specific high-value features. Separate your users into two groups: those who converted and those who did not. Identify the behaviors that appear significantly more often in the converted group. These are your candidate PQL signals. Validate your criteria by testing them against historical data. If your proposed PQL definition would have correctly identified 70-80% of your past conversions while flagging fewer than 30% of total users, you have a useful model. Consider both positive signals (actions that indicate buying intent) and negative signals (actions that indicate the user is not a fit). A user who hits API rate limits but has a free email domain may not be a good PQL. Review and update your PQL criteria quarterly. As your product evolves and your customer base changes, the behaviors that predict conversion will shift. ## PQL scoring models A binary PQL model uses a simple threshold: if a user does X, they are a PQL. This is the simplest approach and works well for products with a clear activation event. For example, "any user who invites 3+ teammates is a PQL." A weighted scoring model assigns points to different behaviors and triggers PQL status when the score crosses a threshold. This allows you to combine multiple weaker signals into a strong composite signal. For example, inviting a teammate (10 points) + creating a project (5 points) + returning 3 days in a row (15 points) = PQL at 25 points. A predictive model uses machine learning to identify which combination of behaviors best predicts conversion. This is the most sophisticated approach but requires enough historical conversion data to train a reliable model—typically at least a few hundred conversions. Most teams should start with a binary model, graduate to weighted scoring as they learn which signals matter most, and consider predictive models only when they have sufficient data and engineering resources. ## Examples of PQL signals by product type Collaboration tools: Adding 5+ team members, creating shared workspaces, or integrating with other tools the team already uses. 
These signals indicate the product is becoming embedded in team workflows.

Analytics products: Creating multiple dashboards, setting up scheduled reports, or connecting production data sources. These actions show the user is moving beyond evaluation into real usage.

Developer tools: Making production API calls, exceeding free-tier rate limits, or deploying to production environments. These are strong signals that the tool has become part of the development workflow.

Design tools: Sharing designs with external stakeholders, creating a team library, or exporting assets for production use. These indicate the tool has moved from personal experimentation to professional use.

Communication platforms: Daily active usage across multiple channels, integrating with workflow tools, or having a high percentage of the team active. These show the platform has become the default communication channel.

Negative signals matter too—accounts that reach PQL thresholds but then go dormant may need nurturing rather than sales outreach.

## Building a PQL model

A useful PQL model combines quantitative usage thresholds (such as number of projects, events, or seats) with qualitative fit data such as role or company size. Over time you can refine your PQL model by comparing which behaviors show up most often in accounts that convert or expand. Start simple—a single behavior threshold—and add complexity only when you have data to justify it.

## Operationalizing PQLs

Route PQL alerts to sales in real time so they can reach out while the user is actively engaged. Provide sales with context: what the user has done, what plan limits they are approaching, and what their company profile looks like. Track PQL-to-opportunity and PQL-to-close rates to validate and refine your PQL definition.

Create a feedback loop between sales and product. When sales reaches out to PQLs and they convert (or do not), record the outcome and use it to refine your PQL criteria over time.

## Examples

- PQL model for a project management SaaS: A project management tool defines a PQL as any workspace with 5+ active users that has created at least 3 projects and integrated with one external tool (Slack, GitHub, or Jira). Workspaces meeting these criteria convert to paid at 32%, compared to 4% for the overall free user base. The sales team receives a Slack notification with workspace details when an account qualifies.
- Weighted PQL scoring for an analytics platform: An analytics product assigns points to key behaviors: connecting a data source (10 pts), creating a dashboard (5 pts), sharing a dashboard with a teammate (15 pts), setting up a scheduled report (20 pts), and exceeding the free-tier event limit (25 pts). Accounts that reach 40 points are flagged as PQLs. This composite approach captures accounts that show buying intent through different paths rather than requiring one specific action. (A code sketch of this model follows below.)
- PQL with firmographic enrichment for a developer tool: A CI/CD platform combines usage signals (10+ builds per week, production deployments) with firmographic data from Clearbit (company size 50+, Series A or later). Usage alone generates too many false positives from hobbyist developers. The combined model reduces noise by 60% while maintaining the same conversion rate, allowing the small sales team to focus on accounts with both product engagement and enterprise potential.
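As a concrete sketch of that weighted model, using the point values and threshold from the analytics example above; the behavior keys and function names are illustrative, not Skene's API:

```typescript
// Weighted PQL scoring: points per behavior, PQL at a threshold.

const PQL_THRESHOLD = 40;

const behaviorPoints: Record<string, number> = {
  connected_data_source: 10,
  created_dashboard: 5,
  shared_dashboard: 15,
  scheduled_report: 20,
  exceeded_free_tier: 25,
};

// Score an account from the distinct behaviors it has shown.
function pqlScore(behaviors: Set<string>): number {
  let score = 0;
  for (const b of behaviors) score += behaviorPoints[b] ?? 0;
  return score;
}

function isPql(behaviors: Set<string>): boolean {
  return pqlScore(behaviors) >= PQL_THRESHOLD;
}

// Example: data source (10) + shared dashboard (15) + report (20) = 45 → PQL.
isPql(new Set(["connected_data_source", "shared_dashboard", "scheduled_report"])); // true
```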
## Implementation notes

- Define PQL criteria based on historical conversion data—look at what converted accounts did before buying.
- Avoid setting PQL thresholds too low (creating noise) or too high (missing opportunities).
- Combine usage signals with firmographic data (company size, industry) for better lead scoring.
- Instrument PQL events in your product analytics and pipe them to your CRM so sales can act on them without switching tools.
- Track the time between PQL qualification and first sales touch. Every hour of delay reduces conversion probability.

## FAQ

Q: How do I identify the right PQL signals for my product?
A: Analyze your converted customers and identify the common behaviors they exhibited before converting. Look for actions that indicate value realization (completing key workflows) or expansion intent (inviting teammates, hitting limits). Validate by comparing conversion rates of users who exhibit these behaviors vs. those who do not.

Q: Should PQLs replace MQLs entirely?
A: Not necessarily. In a hybrid GTM model, both can coexist. MQLs may still be valuable for users who cannot self-serve (enterprise accounts needing custom contracts) or for products where the free tier does not fully demonstrate value. Many PLG companies use PQLs as the primary signal and MQLs as a supplement.

Q: How quickly should sales follow up on a PQL?
A: Speed matters. Reach out within hours, not days. PQLs represent active users who are engaged right now—waiting too long means they may have moved on, lost momentum, or found an alternative. Automate PQL alerts to sales and provide them with enough context to have a relevant conversation immediately.

Q: What conversion rate should I expect from PQLs?
A: PQL-to-paid conversion rates vary significantly by product and market, but well-defined PQLs typically convert at 15-30%, compared to 1-5% for general free users. If your PQL conversion rate is below 10%, your criteria may be too loose. If it is above 40%, you may be setting the bar too high and missing earlier-stage opportunities.

Q: How many PQL criteria should I have?
A: Start with one or two strong signals that clearly differentiate converters from non-converters. As you gather more data, you can add criteria to build a composite score. Avoid having more than 5-7 criteria in a weighted model—too many variables make the model hard to understand, maintain, and explain to the sales team.

Q: Can PQLs work for enterprise sales?
A: Yes, and they are often even more valuable in enterprise contexts. Enterprise PQLs typically look at account-level behavior (multiple users active, cross-department usage, integration with enterprise tools) rather than individual user actions. The key difference is that enterprise PQL models should include firmographic fit criteria alongside usage data to ensure the account matches your ideal customer profile.

---

# Onboarding journey
Source: https://www.skene.ai/resources/glossary/onboarding-journey
Category: Onboarding
Also known as: User onboarding, Onboarding flow, Onboarding experience, New user journey

A guided sequence of steps that helps new users reach activation, often personalized by role, use case, or plan.

An onboarding journey is the structured path a new user follows to reach activation in your product. It usually combines in-product prompts, checklists, and contextual help to guide users through critical setup and first-use steps. In product-led growth, the onboarding journey is not just a feature tour—it is the critical bridge between signup and value realization.
A well-designed journey reduces time-to-value, increases activation rates, and sets the foundation for long-term retention by helping users form the right habits from day one. The best onboarding journeys feel invisible to the user: they guide without being intrusive, educate without overwhelming, and celebrate progress without being patronizing. Building an effective onboarding journey requires understanding your users deeply, mapping the steps between signup and value, and continuously iterating based on data and user feedback. ## Definition An onboarding journey is the structured path a new user follows to reach activation in your product. It usually combines in-product prompts, checklists, and contextual help to guide users through critical setup and first-use steps. Unlike a product tour that shows features, an onboarding journey is outcome-oriented: it is designed to get the user to a specific valuable result as quickly as possible. ## Mapping the onboarding journey Start by identifying the activation event—the action or milestone that correlates most strongly with long-term retention. Your entire onboarding journey should lead users toward this event. Work backward from the activation event to identify every step a user must complete to get there. This might include account setup, data import, configuration, and first use of a core feature. For each step, note the prerequisites (what must happen before this step), the expected time to complete, common failure points, and what help resources are available. Map alternative paths for different user types. A technical user importing data via API has a different journey than a non-technical user uploading a CSV, even if both end at the same activation event. Document the journey visually—a flowchart or journey map that the entire team can reference ensures everyone is aligned on what the ideal user path looks like. ## Onboarding journey stages Stage 1 - Welcome and context collection: The first interaction after signup. Collect minimal information (role, use case, team size) to personalize the journey. Keep this to 1-2 screens maximum. Stage 2 - Core setup: The essential configuration steps that must happen before the user can experience value. This might include connecting a data source, creating a workspace, or importing existing data. Aim to minimize required setup by using smart defaults. Stage 3 - First value moment: Guide the user through their first meaningful action. This is where the aha moment typically occurs. For a project management tool, it might be creating and completing their first task. For an analytics tool, it might be seeing their first insight. Stage 4 - Habit formation: After the initial value moment, guide users toward the behaviors that will make them regular users. This might involve setting up notifications, creating recurring workflows, or integrating with tools they already use. Stage 5 - Team expansion: For collaborative products, guide activated users to invite teammates. This stage often happens after individual activation and is critical for PLG virality and account expansion. ## Components of an onboarding journey Welcome flow: Initial screens that set expectations and collect context for personalization. Setup steps: Required configuration like connecting data sources, inviting teammates, or setting preferences. Guided actions: In-product prompts that walk users through their first key workflow. Progress indicators: Checklists or progress bars that show users how close they are to completion. 
Contextual tooltips: Just-in-time help that appears when users encounter a feature for the first time. Empty states: Purposefully designed screens that appear when there is no data yet, guiding users toward their first action instead of showing a blank page. ## What makes a good onboarding journey Effective onboarding journeys are opinionated, short, and aligned to clear activation milestones. They avoid overwhelming users with every feature and instead focus on leading them to the first meaningful outcome. Great journeys are personalized—different roles, use cases, or plans may need different paths to value. ## Personalized vs generic onboarding journeys Generic onboarding shows every user the same flow regardless of their role, use case, or experience level. It is simpler to build and maintain but performs worse for most products because different users need different paths to value. Personalized onboarding uses context (collected during signup or inferred from behavior) to adapt the journey. A developer might skip the UI walkthrough and go straight to API documentation. A marketing manager might see templates relevant to campaign management. Light personalization can be achieved with a single question during signup ("What do you want to accomplish first?") that branches into 2-3 different journey paths. This gives 80% of the benefit with minimal engineering effort. Advanced personalization adapts the journey in real-time based on behavior. If a user skips a recommended step, the journey adapts. If a user completes something ahead of schedule, the next step appears immediately instead of waiting for a trigger. The tradeoff is clear: personalized journeys perform better but are more complex to build, test, and maintain. Start with light personalization and add complexity only when data shows which segments need different treatment. ## Measuring onboarding journey success Track completion rate for each step to identify where users drop off. Measure time-to-completion for the full journey and individual steps. Correlate journey completion with activation rate and retention to validate that your journey leads to the right outcomes. Calculate the "journey influence rate": what percentage of users who complete the onboarding journey activate vs. those who skip it or drop off partway through? This tells you how much value the journey adds. Monitor step-specific metrics: for each onboarding step, track the start rate, completion rate, average time, and error rate. Steps with low completion rates or high error rates are your top priorities for improvement. Run A/B tests on journey variations to isolate which changes improve activation. Test one variable at a time (step order, copy, number of steps) to build a clear understanding of what works. ## Examples - Analytics platform onboarding journey: An analytics tool uses a 4-stage onboarding journey: (1) Welcome screen asks "What do you want to track?" with options for web analytics, product analytics, or marketing analytics. (2) Based on the answer, the user sees a tailored setup flow—web analytics users get a code snippet to install, product analytics users get an SDK guide, and marketing users get a CSV import option. (3) Once data is flowing, the user is guided to create their first dashboard using a template matched to their use case. (4) After creating the dashboard, they are prompted to share it with a teammate or set up a daily email digest. This journey increased activation from 25% to 48%. 
- Project management tool onboarding journey: A project management SaaS segments users by team size at signup. Solo users see a minimal journey: create a project, add 3 tasks, complete one task. Small team users (2-10) see the solo journey plus an invitation step and a shared view tutorial. Enterprise trial users see a guided setup with SSO configuration, permission settings, and a scheduled onboarding call. By personalizing the journey, the team reduced time-to-activation from 4 days to 1 day for solo users and from 2 weeks to 4 days for enterprise trials. - Developer tool onboarding with progressive complexity: A CI/CD platform designed a progressive onboarding journey: Day 1 focuses on running the first build using a zero-config template. Day 2-3 introduces custom configuration through guided prompts. Day 4-7 suggests advanced features like parallel builds, caching, and deployment pipelines. Each stage is triggered by the user completing the previous one, not by calendar time. Users who move faster see the next stage sooner. This progressive approach reduced the Day-7 drop-off rate by 35% compared to the previous approach of showing all features on day one. ## Implementation notes - Design your journey around a single primary activation goal—do not try to teach every feature. - Make the journey skippable for power users who know what they are doing, but track skip rates to understand if you are losing users. - Test journey changes with cohort analysis: compare activation and retention for users who experienced different versions. - Build your onboarding journey as a configurable system, not hardcoded flows. This makes it easy to iterate on the journey without engineering changes for every experiment. - Send a follow-up email to users who start but do not complete the onboarding journey. Include a deep link that takes them back to exactly where they left off. ## FAQ Q: How long should an onboarding journey be? A: As short as possible while still leading to activation. For simple products, this might be 3–5 steps taking a few minutes. For complex B2B products, it might be a multi-day journey with distinct phases. The key is to get users to their first win quickly, then deepen engagement over time. Q: Should I use product tours or checklists for onboarding? A: Both can work, and many products use them together. Product tours are good for linear, must-do sequences. Checklists work better when users have flexibility in the order of steps. Test what works for your users—some prefer guidance, others prefer autonomy. Q: How do I know if my onboarding journey is working? A: Measure activation rate and time-to-value for users who complete the journey vs. those who do not. If journey completers activate at significantly higher rates, your journey is adding value. Also track step-by-step drop-off to identify friction points. Q: Should the onboarding journey be mandatory or optional? A: Make the core path strongly encouraged but not forced. Show the onboarding journey prominently and make it the default experience, but allow experienced users to dismiss or skip it. Track skip rates carefully—if more than 30-40% of users skip the journey, it may feel too long or irrelevant. If users who skip have lower activation rates, your journey is providing genuine value. Q: How often should I update the onboarding journey? A: Review your onboarding journey metrics monthly and make significant updates quarterly. 
Any time you ship a major feature, change your pricing, or observe a meaningful drop in activation rates, revisit the journey. Small copy and ordering changes can be tested continuously. Major structural changes (adding or removing stages, changing the activation goal) should be tested carefully with A/B tests before rolling out to all users. Q: What is the difference between onboarding and activation? A: Onboarding is the process; activation is the outcome. The onboarding journey is the structured set of steps you design to guide users toward value. Activation is the specific event or milestone that indicates a user has experienced enough value to become a long-term user. A good onboarding journey leads to activation, but they are not the same thing—users can activate without completing your formal onboarding, and users can complete onboarding without truly activating if the journey is not well-designed. --- # Feature adoption Source: https://www.skene.ai/resources/glossary/feature-adoption Category: Engagement Also known as: Feature usage, Feature engagement, Adoption rate, Adoption metrics The extent to which users discover, try, and consistently use specific features in your product. Feature adoption measures whether users are discovering and repeatedly using specific capabilities in your product. In product-led growth, feature adoption is a critical signal of whether users are progressing beyond initial activation into deeper value. High feature adoption indicates product-market fit for specific capabilities, while low adoption may signal discoverability issues, usability problems, or features that do not solve real user needs. Feature adoption data informs roadmap prioritization, pricing strategy, and expansion plays. ## Definition Feature adoption measures whether users are discovering and repeatedly using specific capabilities in your product. It encompasses the full lifecycle from awareness (knowing a feature exists) to trial (trying it once) to habitual use (using it repeatedly). ## The feature adoption funnel Awareness: Does the user know the feature exists? Discovery: Has the user found and opened the feature? Trial: Has the user tried the feature at least once? Repeat use: Is the user using the feature regularly? Habitual use: Has the feature become part of the user's regular workflow? ## Why feature adoption matters in PLG In PLG, feature adoption is a key signal of whether users are progressing beyond initial activation into deeper value. It also informs roadmap and pricing decisions, since heavily adopted features often justify premium plans or add-ons. Low-adoption features may indicate opportunities for better onboarding, improved UX, or features to deprecate. ## How to measure feature adoption You can measure feature adoption using both breadth (how many accounts use a feature) and depth (how often they use it over time). Segmenting adoption by plan, role, or acquisition channel helps you understand which features are driving value for which users. Track adoption curves over time to see if new users are adopting features faster than older cohorts (indicating product improvements). ## How to improve feature adoption Improve discoverability with contextual prompts, tooltips, and in-app announcements. Reduce friction in the feature itself—simplify the UI, provide templates, and offer sensible defaults. Create feature-specific onboarding for complex capabilities that require learning. 
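To make the breadth-and-depth measurement concrete, here is a minimal TypeScript sketch. The event shape and the three-use threshold for "repeat adoption" are illustrative assumptions; the point is to separate one-time trial from repeated use.

```typescript
// Hypothetical event shape: one row per feature use.
interface FeatureEvent {
  accountId: string;
  feature: string;
  timestamp: Date;
}

// Breadth: share of accounts that used the feature at least once.
function adoptionBreadth(
  events: FeatureEvent[],
  feature: string,
  totalAccounts: number,
): number {
  const accounts = new Set(
    events.filter(e => e.feature === feature).map(e => e.accountId),
  );
  return totalAccounts ? accounts.size / totalAccounts : 0;
}

// Depth: average uses per adopting account over the period.
function adoptionDepth(events: FeatureEvent[], feature: string): number {
  const uses = events.filter(e => e.feature === feature);
  const accounts = new Set(uses.map(e => e.accountId));
  return accounts.size ? uses.length / accounts.size : 0;
}

// Trial vs. adoption: accounts that used the feature repeatedly.
function repeatAdopters(
  events: FeatureEvent[],
  feature: string,
  minUses = 3, // assumed cutoff for "repeated" use
): string[] {
  const counts = new Map<string, number>();
  for (const e of events.filter(ev => ev.feature === feature)) {
    counts.set(e.accountId, (counts.get(e.accountId) ?? 0) + 1);
  }
  return [...counts].filter(([, n]) => n >= minUses).map(([id]) => id);
}
```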
## Implementation notes - Distinguish between trial (used once) and adoption (used repeatedly)—one-time use does not indicate value. - Track time-to-adoption: how quickly do users discover and adopt key features after signup? - Use feature flags to gradually roll out features and measure adoption before full release. ## FAQ Q: What is a good feature adoption rate? A: It depends on the feature. Core features that are central to your value proposition should see 70–90% adoption among active users. Secondary features might see 20–50%. A feature that only 5% of users adopt may indicate a niche use case, poor discoverability, or something that should be reconsidered. Q: How do I identify which features to improve? A: Look for features with high awareness but low trial (discoverability or positioning issue), high trial but low repeat use (usability or value issue), or features that power users love but others ignore (potential for better onboarding). Q: Should I track feature adoption at the user or account level? A: Both matter. User-level adoption tells you about individual behavior. Account-level adoption (has anyone in the account used this feature?) matters for expansion and renewal conversations. For B2B products, account-level adoption is often more relevant for commercial decisions. --- # Expansion revenue Source: https://www.skene.ai/resources/glossary/expansion-revenue Category: Revenue Also known as: Upsell revenue, Account expansion, Revenue expansion, Land and expand revenue Revenue from existing customers through increased usage, seat expansion, upsells, or cross-sells. Expansion revenue is revenue from existing customers that comes from increased usage, additional seats, higher-tier plans, or cross-sold products. In product-led growth, expansion revenue is often the largest driver of growth because it comes from users who have already realized value and want more. Strong expansion revenue indicates product-market fit and a pricing model aligned with value delivery. Companies with high net revenue retention (NRR > 100%) generate more revenue from existing customers each year than they lose to churn, creating a compounding growth engine. ## Definition Expansion revenue is revenue from existing customers that comes from increased usage, additional seats, higher-tier plans, or cross-sold products. It is the opposite of contraction (downgrades) and churn (cancellations) in the revenue lifecycle. ## Types of expansion revenue Seat expansion: Adding more users to an existing plan. Usage expansion: Paying more due to increased consumption (for usage-based pricing). Upsells: Upgrading to a higher-tier plan with more features. Cross-sells: Purchasing additional products or add-ons. ## Expansion in a PLG motion In PLG, expansion is often driven by organic product usage: teams invite more teammates, integrate more data, or rely on your product in more workflows. The product itself creates the expansion opportunity—users hit limits, see value in premium features, or spread adoption across their organization. Sales teams in PLG focus on accounts that show expansion signals rather than cold outreach. ## How to measure expansion revenue You can measure expansion revenue by tracking net revenue retention (NRR), or by isolating revenue from upgrades, additional seats, and add-ons. Healthy PLG businesses often have strong expansion that more than offsets any logo churn over time. 
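As a concrete sketch of the measurement, the snippet below applies the standard NRR formula (spelled out in the FAQ further down) to one period's revenue movements. The field names are illustrative assumptions.

```typescript
// Revenue movements for a cohort of existing customers over one period.
interface RevenueMovement {
  startingMrr: number;    // MRR at the start of the period
  expansionMrr: number;   // upgrades, added seats, increased usage
  contractionMrr: number; // downgrades
  churnedMrr: number;     // cancellations
}

// NRR = (Starting MRR + Expansion - Contraction - Churn) / Starting MRR
function netRevenueRetention(m: RevenueMovement): number {
  if (m.startingMrr === 0) return 0;
  return (
    (m.startingMrr + m.expansionMrr - m.contractionMrr - m.churnedMrr) /
    m.startingMrr
  );
}

// Example: 100k starting MRR, 15k expansion, 3k contraction, 5k churn -> 1.07
console.log(
  netRevenueRetention({
    startingMrr: 100_000,
    expansionMrr: 15_000,
    contractionMrr: 3_000,
    churnedMrr: 5_000,
  }),
);
```

An NRR above 1.0 means expansion more than offset contraction and churn for that period.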
Track expansion by cohort to see if newer customers expand faster (indicating product improvements) or slower (potential pricing or adoption issues). ## How to drive expansion revenue Design pricing that naturally scales with value: seats, usage, or outcomes. Create clear upgrade paths with features that unlock at higher tiers. Use product signals to identify expansion-ready accounts and trigger outreach. Make it easy to add seats or upgrade without requiring a sales conversation. ## Implementation notes - Align pricing with value delivery—if customers can get 10x value without paying more, you are leaving expansion on the table. - Track leading indicators of expansion (approaching limits, inviting teammates, using premium features on trial) to predict and accelerate expansion. - Do not gate core value behind paid tiers—expansion should come from users wanting more, not from artificial restrictions. ## FAQ Q: What is a good expansion revenue benchmark? A: Top PLG companies often see expansion revenue that equals 20–40% of their starting ARR annually. This is reflected in NRR above 120%, meaning existing customers grow faster than any churn or contraction. However, benchmarks vary by segment and pricing model. Q: How is expansion revenue different from net revenue retention? A: Expansion revenue is the gross amount of additional revenue from existing customers. Net revenue retention (NRR) is a ratio that accounts for both expansion and contraction/churn. NRR = (Starting MRR + Expansion - Contraction - Churn) / Starting MRR. Q: Should expansion be product-led or sales-led? A: In PLG, small expansions (adding a few seats, minor upgrades) should be self-serve. Larger expansions (enterprise contracts, significant tier upgrades) often benefit from sales involvement. The key is using product signals to identify which accounts need sales attention vs. which will expand on their own. --- # Retention cohort Source: https://www.skene.ai/resources/glossary/retention-cohort Category: Metrics Also known as: Cohort analysis, User cohort, Retention analysis, Cohort retention A group of users or accounts who started using your product in the same period and whose retention you track over time. A retention cohort is a group of users or accounts who started using your product in the same time window, such as a week or month. Cohort analysis is essential for understanding how retention changes over time and whether product improvements are working. Unlike aggregate metrics that mix old and new users together, cohort analysis isolates the experience of each group, revealing trends that would otherwise be hidden. In PLG, cohort analysis is the primary way to measure whether changes to onboarding, pricing, or the product itself are improving user retention. Teams that adopt retention cohort analysis can pinpoint exactly when and why users churn, compare the effectiveness of different onboarding flows, and benchmark their performance against industry standards. Whether you are a startup trying to prove product-market fit or a growth-stage company optimizing expansion, retention cohorts give you the clearest picture of how your product is performing over time. ## Definition A retention cohort is a group of users or accounts who started using your product in the same time window, such as a week or month. You track each cohort over time to see what percentage are still active at Day 7, Day 30, Day 90, and beyond. 
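Before reading a cohort chart, it can help to see how the underlying table is computed. The sketch below groups users by signup week and calculates the share still active at each checkpoint. The input shape is an assumption, and it uses unbounded retention (any qualifying activity on or after day N) as one possible activity definition; tie yours to value-driving actions, as discussed later in this entry.

```typescript
// Hypothetical input: each user's signup cohort plus the day offsets
// (days since signup) on which they performed a value-driving action.
interface CohortUser {
  signupWeek: string;         // e.g. "2025-W03"
  activeDayOffsets: number[]; // days since signup with qualifying activity
}

// Rows = cohorts, columns = checkpoints, each cell = share still active.
function retentionTable(users: CohortUser[], checkpoints = [7, 30, 90]) {
  const byCohort = new Map<string, CohortUser[]>();
  for (const u of users) {
    const bucket = byCohort.get(u.signupWeek) ?? [];
    bucket.push(u);
    byCohort.set(u.signupWeek, bucket);
  }
  const rows: Record<string, Record<string, number>> = {};
  for (const [week, cohort] of byCohort) {
    rows[week] = {};
    for (const day of checkpoints) {
      // Unbounded retention: counted as retained if active on or after day N.
      const retained = cohort.filter(u =>
        u.activeDayOffsets.some(d => d >= day),
      ).length;
      rows[week][`day${day}`] = retained / cohort.length;
    }
  }
  return rows;
}
```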
## How to read a retention cohort chart Rows represent cohorts (e.g., users who signed up in January, February, etc.). Columns represent time periods since signup (Week 1, Week 2, etc.). Each cell shows the percentage of that cohort still active at that time. Looking down a column shows whether newer cohorts retain better than older ones. The diagonal of a cohort chart shows the most recent data point for each cohort, which is useful for spotting sudden changes in behavior. ## Why retention cohorts matter in PLG Cohort analysis helps you see whether newer groups are retaining better or worse than older groups, especially after you change onboarding or pricing. Aggregate retention metrics can hide problems—if you are acquiring more users but they retain worse, aggregates might still look stable. Cohorts reveal the true trajectory of your retention and whether you are improving over time. ## Types of cohorts Time-based cohorts: Grouped by when users signed up (most common). Behavioral cohorts: Grouped by actions taken (e.g., users who completed onboarding vs. those who did not). Acquisition cohorts: Grouped by how users were acquired (organic vs. paid, channel-specific). Feature cohorts: Grouped by which features users adopted first, helping you understand which entry points lead to the best retention. ## Using retention cohorts to improve PLG By comparing cohorts before and after you change onboarding or pricing, you can see whether those changes improved or hurt retention. You can also use cohorts to identify segments that retain especially well and design more targeted PLG programs for them. Track cohort curves to understand when users typically churn—early drop-off indicates onboarding issues, later drop-off may indicate engagement problems. ## Common mistakes in cohort analysis Using cohorts that are too large or too small. If a cohort contains thousands of users across an entire quarter, you lose the ability to isolate the effect of specific changes. If a cohort has only a handful of users, the data is too noisy to draw conclusions. Ignoring cohort size when comparing retention rates. A cohort of 50 users with 40% Day-30 retention is not directly comparable to a cohort of 5,000 users with 35% retention—the smaller cohort has much wider confidence intervals. Only looking at time-based cohorts. Behavioral cohorts often reveal more actionable insights because they group users by what they did, not just when they signed up. Failing to define "active" clearly. If your definition of an active user is too loose (e.g., any login counts), your retention numbers will look better than reality. Tie your activity definition to value-driving actions. Not controlling for seasonality. Some months naturally have higher or lower engagement. Compare cohorts from similar periods or adjust for seasonal effects. ## Retention cohort benchmarks by industry SaaS products: Day-1 retention of 40-60% is typical, with Day-30 retention ranging from 15-30% for average products and 30-50% for strong products. Enterprise SaaS tends to retain better due to switching costs. Developer tools: Often see higher initial drop-off (Day-1 around 30-50%) but stronger long-term retention among activated users, with Day-90 retention of 20-40% for tools that become part of a workflow. Collaboration tools: Day-1 retention is often high (50-70%) because of team dynamics, but products need to reach a critical mass of team adoption to sustain retention beyond Day 30. 
Consumer-facing products: Typically see the steepest initial drop-off, with Day-1 retention of 25-40% and Day-30 retention of 10-20%. Top-performing consumer apps may reach 25-35% at Day 30. These benchmarks are general guidelines. Your retention targets should be based on your specific product category, user base, and business model. The most important metric is whether your cohorts are improving over time. ## Tools for retention cohort analysis Product analytics platforms such as Amplitude, Mixpanel, and PostHog offer built-in cohort analysis features that let you create time-based and behavioral cohorts, visualize retention curves, and compare cohorts side by side. For early-stage teams, a spreadsheet-based approach can work well. Export your event data, group users by signup week, and calculate the percentage still active at each interval. This is manual but helps you deeply understand the data. Data warehouse tools like BigQuery or Snowflake combined with a BI tool (Metabase, Looker, or Mode) give you the most flexibility for custom cohort definitions and advanced segmentation. Whichever tool you use, the key is consistency. Define your cohort parameters, activity definition, and time intervals once, and keep them stable so you can compare cohorts over time. ## Examples - Weekly signup cohort for a B2B SaaS tool: A project management tool groups users by their signup week. The January Week 1 cohort had 500 signups, with 60% active in Week 1, 35% in Week 2, 25% in Week 4, and 18% in Week 8. After redesigning the onboarding flow, the February Week 1 cohort showed 500 signups with 65% in Week 1, 42% in Week 2, 32% in Week 4, and 24% in Week 8. The improvement across every interval confirmed the new onboarding was working. - Behavioral cohort comparing activated vs. non-activated users: An analytics platform splits each monthly cohort into two groups: users who completed their first dashboard (activated) and those who did not. Activated users showed 55% Day-30 retention compared to 8% for non-activated users. This 7x difference justified investing heavily in reducing friction before the first dashboard creation. - Acquisition channel cohort analysis: A developer tool compared retention by acquisition source. Users from documentation and technical blog posts had 40% Day-30 retention, while users from paid social ads had only 12%. This insight shifted the marketing budget toward content-driven acquisition, improving overall retention and reducing cost per retained user by 60%. ## Implementation notes - Use weekly cohorts for fast-moving products and monthly cohorts for slower sales cycles. - Compare behavioral cohorts (activated vs. not) to quantify the impact of activation on retention. - Set up automated cohort reporting so you can spot trends early rather than discovering problems months later. - Create a "retention dashboard" that automatically updates with each new cohort so the team can review trends in weekly meetings. - When you ship a major onboarding change, tag the cohort so you can easily compare "before" and "after" groups months later. ## FAQ Q: What is a good retention curve shape? A: A healthy retention curve drops initially, then flattens out (stabilizes). If your curve keeps declining without flattening, you have a retention problem. The level where it flattens is your long-term retention rate. Products with strong product-market fit often see curves that flatten at 20–40% or higher. Q: How far back should I look at cohorts? 
A: At minimum, look at cohorts from the past 6–12 months. For trend analysis, you want enough cohorts to see patterns. For evaluating recent changes, compare the last 2–3 cohorts to earlier baselines. Q: Should I track user retention or revenue retention? A: Both matter for different reasons. User retention tells you about product engagement and stickiness. Revenue retention (NRR) tells you about the business impact. A product can have high user retention but low revenue retention if free users stay but paying users leave. Q: How often should I review my retention cohorts? A: Most teams benefit from reviewing cohort data weekly or biweekly. Weekly reviews help you catch sudden changes quickly, such as a broken onboarding flow or a regression from a product update. Monthly reviews are useful for spotting longer-term trends and evaluating the impact of strategic changes like pricing or packaging updates. Q: What is the difference between retention rate and churn rate? A: Retention rate and churn rate are two sides of the same coin. If your Day-30 retention rate is 35%, your Day-30 churn rate is 65%. Teams typically focus on retention rate because it frames the metric positively, but churn rate can be more useful when communicating urgency to stakeholders or calculating revenue impact. Q: Can I use retention cohorts to predict future revenue? A: Yes. If you know the average revenue per user and your retention curve shape, you can model the expected lifetime value (LTV) of a cohort. Multiply the number of users in the cohort by the retention rate at each interval and the average revenue per user to project future revenue. This is especially useful for forecasting and for justifying investment in retention improvements. --- # Aha moment Source: https://www.skene.ai/resources/glossary/aha-moment Category: Experience Also known as: Eureka moment, Magic moment, Value moment, First value experience The moment when a user first experiences clear, personal value from your product. The "aha moment" is the point where a user first experiences clear, personal value from your product and understands why it is useful. This emotional realization—when something "clicks" for the user—is the psychological foundation of activation and retention. While you cannot directly measure an emotional state, you can identify the actions that typically precede or accompany the aha moment and use them as proxies. In PLG, understanding and accelerating the path to the aha moment is one of the most important things you can do to improve activation and retention. Some of the most successful product companies in history have built their entire growth strategy around identifying and shortening the time to the aha moment. By studying what retained users did differently from churned users, you can reverse-engineer the behaviors that signal the aha moment and design your product experience to guide every new user toward that realization as quickly as possible. ## Definition The "aha moment" is the point where a user first experiences clear, personal value from your product and understands why it is useful. It is an emotional and cognitive shift—the user goes from skeptical or curious to genuinely seeing how the product fits into their life or work. The term originates from the German interjection "aha," expressing a sudden insight. In product development, it refers specifically to the moment a user transitions from evaluating a product to understanding its core value proposition through direct experience. 
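One way to operationalize the proxy idea is to compare the early actions of retained and churned users and rank candidate actions by lift. The sketch below is illustrative: the input shape is an assumption, and lift is correlation rather than causation, so candidates should be validated with experiments and interviews as described later in this entry.

```typescript
// Hypothetical input: for each user, the distinct actions taken in the
// first 7 days, plus whether they were retained at Day 30.
interface EarlyBehavior {
  userId: string;
  firstWeekActions: Set<string>;
  retainedDay30: boolean;
}

// For each candidate action, compare how often it appears among retained
// vs. churned users. High-lift actions are aha-moment proxy candidates.
function ahaCandidates(users: EarlyBehavior[]) {
  const retained = users.filter(u => u.retainedDay30);
  const churned = users.filter(u => !u.retainedDay30);
  const actions = new Set(users.flatMap(u => [...u.firstWeekActions]));
  const share = (group: EarlyBehavior[], action: string) =>
    group.length
      ? group.filter(u => u.firstWeekActions.has(action)).length / group.length
      : 0;
  return [...actions]
    .map(action => ({
      action,
      retainedShare: share(retained, action),
      churnedShare: share(churned, action),
      // Small floor avoids division by zero for rare actions.
      lift: share(retained, action) / Math.max(share(churned, action), 0.01),
    }))
    .sort((a, b) => b.lift - a.lift);
}
```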
## Connection to activation In many products, the aha moment and activation event are closely related or identical, but not always; some users may have an aha moment before completing your formal activation criteria. The aha moment is emotional; activation is measurable. You use activation events as proxies for the aha moment because you cannot directly instrument feelings. ## Famous aha moment examples Slack discovered that teams who sent 2,000 messages were almost always retained. The aha moment was not about sending a single message—it was about reaching a threshold where the team realized Slack had replaced email for internal communication. Facebook famously identified that users who added 7 friends within their first 10 days were significantly more likely to become long-term active users. The aha moment was the realization that Facebook was where their actual social network lived. Dropbox found that users who saved their first file to the Dropbox folder had a dramatically higher retention rate. The aha moment came when users realized their file was instantly accessible on another device. Twitter discovered that following 30 accounts was the threshold. Once users curated a feed that was personally relevant, they understood why the platform was valuable. Zoom identified that hosting or joining the first video call with more than one participant was the aha moment—users immediately grasped the simplicity compared to other conferencing tools. ## Finding your product's aha moment You can identify likely aha moments by interviewing successful users and looking for the first time they felt the product "clicked" for them. Instrumentation around those moments, such as key feature usage or workflow completion, helps you validate and refine your hypothesis. Look for patterns in successful users: What did they do in the first session? What feature did they use most? What sequence of actions led to retention? ## Measuring aha moments with data Start by comparing the behavior of retained users (active at Day 30 or Day 60) with churned users. Look for actions or thresholds that appear significantly more often in the retained group. Use correlation analysis to identify which early actions (within the first 1-7 days) are the strongest predictors of long-term retention. The action with the highest correlation is likely your aha moment proxy. Test your hypothesis by running an experiment: guide a group of new users toward the suspected aha moment action and measure whether their retention improves compared to a control group. Be careful about confusing correlation with causation. Users who are naturally more engaged will do more of everything, so look for specific actions that matter disproportionately, not just total activity volume. Revisit your aha moment definition quarterly. As your product evolves and your user base changes, the action that best predicts retention may shift. ## Common mistakes when defining aha moments Defining the aha moment too broadly. "Using the product" is not an aha moment. It needs to be a specific, measurable action or threshold that correlates with retention. Assuming every user has the same aha moment. Different personas often have different paths to value. A developer using an API platform has a different aha moment than a product manager using the same platform through a dashboard. Optimizing for the aha moment proxy instead of the underlying value. If your aha moment is "creating the first dashboard," do not trick users into creating empty dashboards. 
The goal is genuine value realization, not metric manipulation. Setting the bar too high. If your aha moment requires hours of setup, most users will never reach it. Either simplify the path or find an earlier, lighter action that still correlates with retention. Not validating with qualitative data. Numbers can tell you what happened, but user interviews tell you why. Combine quantitative analysis with direct user feedback to confirm your aha moment hypothesis. ## Accelerating the path to aha Reduce friction before the aha moment—every extra step is an opportunity for users to drop off before experiencing value. Use opinionated defaults and templates so users see value before doing heavy configuration. Guide users toward the aha moment with targeted onboarding that focuses on one key outcome. Pre-populate the product with sample data or demo content so users can experience the value immediately, even before they bring their own data. Use progressive disclosure to hide advanced features and focus new users on the shortest path to the aha moment. ## Examples - Analytics platform aha moment: A product analytics tool found that users who created their first custom event query within the first session had 4x higher 30-day retention than users who only viewed pre-built reports. The team redesigned onboarding to walk every new user through creating a custom query with their own data, reducing median time-to-aha from 3 days to 20 minutes. - Collaboration tool aha moment: A team workspace product discovered that the aha moment was not about the individual user experience at all—it was about having at least three team members active in the same workspace. Solo users churned at 80% within 30 days, while users in workspaces with 3+ active members retained at 65%. The team shifted onboarding to prioritize team invitations before any other setup step. - Developer tool aha moment: A CI/CD platform identified that developers who saw their first successful build pass had 3x higher retention than those who only configured the tool. Many users were dropping off during complex YAML configuration. The team introduced a "zero-config" starter template that triggered a successful build within 2 minutes of signup, dramatically increasing the percentage of users who reached the aha moment. ## Implementation notes - Interview churned users to understand what aha moment they never reached. - The aha moment may be different for different personas—a developer's aha moment may differ from a product manager's. - Use session replay tools to watch how successful users reach their aha moment and identify friction points. - Create a "time to aha" metric and track it as a key product health indicator. Measure the median time from signup to the aha moment action and work to reduce it each quarter. - Map the steps between signup and the aha moment, then count how many users drop off at each step. The step with the highest drop-off rate is your biggest opportunity. ## FAQ Q: How do I measure the aha moment if it is emotional? A: You cannot measure emotions directly, but you can identify behavioral proxies. Find the actions that correlate most strongly with retention and use those as your measurable activation event. The aha moment is the concept; the activation event is the metric. Q: Is the aha moment the same as activation? A: They are related but not identical. The aha moment is the psychological shift when a user "gets it." Activation is the measurable action you use as a proxy. 
In practice, they often occur around the same time, but the aha moment may happen before or after the formal activation event. Q: Can a product have multiple aha moments? A: Yes. Different user personas may have different aha moments based on their use case. Users may also have secondary aha moments as they discover new features over time. However, focus on the primary aha moment that matters most for activation and retention. Q: How long should it take users to reach the aha moment? A: The shorter, the better. For most SaaS products, the aha moment should happen within the first session or within the first day. If it takes longer than a week, most users will churn before experiencing it. Track your "time to aha" metric and work to reduce it through better onboarding, sample data, and reduced setup friction. Q: What if our product is too complex for a quick aha moment? A: Even complex products can create an early aha moment by breaking the value proposition into smaller pieces. Instead of requiring users to complete a full implementation before seeing value, find a lightweight way to demonstrate the core benefit. For example, an enterprise data platform might show users instant insights from sample data before requiring them to connect their own data sources. --- # Self-serve signup Source: https://www.skene.ai/resources/glossary/self-serve-signup Category: Acquisition Also known as: Self-service signup, Free signup, Product signup, Self-serve registration A signup flow where users can start using the product without needing to talk to sales first. Self-serve signup is a flow where users can create an account and start using the product without human intervention from sales or support. In product-led growth, self-serve signup is the entry point to your PLG funnel—it removes the friction of scheduling demos or talking to salespeople, letting users experience your product on their own terms. A well-designed self-serve flow is fast (under 60 seconds to first screen), low-friction (minimal required fields), and leads directly into a guided onboarding journey. ## Definition Self-serve signup is a flow where users can create an account and start using the product without human intervention from sales or support. It enables users to try the product immediately, which is a core requirement for product-led acquisition. ## Role in product-led growth Self-serve signup is foundational for many PLG motions because it reduces friction at the top of the funnel and feeds a steady stream of potential product-qualified leads. It also sets the tone for the product relationship—users expect the same frictionless experience throughout their journey. Without self-serve signup, you cannot run a true PLG motion because users are blocked from experiencing the product. ## Best practices for self-serve signup Reduce the number of required fields to the minimum needed to get users into the product, and collect additional context later through progressive profiling. Offer modern authentication options such as SSO or OAuth where appropriate, and make sure the first screen after signup leads directly into a meaningful onboarding journey. Avoid email verification walls before first value—let users in immediately and verify later. ## Measuring self-serve signup success Signup completion rate: What percentage of users who start signup actually complete it? Time to signup: How long does it take from landing page to first authenticated screen? Signup to activation rate: What percentage of signups reach activation? 
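The three metrics above can be computed from simple per-user timestamps, as in the following sketch. The record shape is an assumption; undefined fields mean the step never happened.

```typescript
// Hypothetical per-user funnel timestamps.
interface SignupRecord {
  signupStartedAt: Date;
  signupCompletedAt?: Date;
  activatedAt?: Date;
}

function signupMetrics(records: SignupRecord[]) {
  const completed = records.filter(r => r.signupCompletedAt);
  const activated = completed.filter(r => r.activatedAt);
  const medianSeconds = (xs: number[]) => {
    const s = [...xs].sort((a, b) => a - b);
    return s.length ? s[Math.floor(s.length / 2)] : 0;
  };
  return {
    // Share of started signups that finish the flow.
    signupCompletionRate: records.length ? completed.length / records.length : 0,
    // Median seconds from starting signup to the first authenticated screen.
    medianTimeToSignup: medianSeconds(
      completed.map(
        r => (r.signupCompletedAt!.getTime() - r.signupStartedAt.getTime()) / 1000,
      ),
    ),
    // Share of completed signups that reach activation.
    signupToActivationRate: completed.length
      ? activated.length / completed.length
      : 0,
  };
}
```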
## Implementation notes - Test your signup flow regularly—try signing up as if you were a new user and note every friction point. - Avoid asking for credit cards upfront unless your product requires payment to function. - Use social/OAuth login options to reduce friction, but always offer email signup as a fallback. ## FAQ Q: Should I require email verification before letting users in? A: Ideally, no. Let users into the product immediately and verify their email in the background or after they have experienced value. Blocking users at the verification step loses a significant percentage of signups who never complete the flow. Q: What fields should I require at signup? A: At minimum: email and password (or OAuth). You can collect additional information (company, role, use case) through progressive profiling after the user is in the product and has seen value. Every additional required field at signup reduces conversion. Q: How do I balance self-serve signup with lead capture for sales? A: Let everyone self-serve and qualify them based on product usage (PQLs) rather than form fills. You can still ask optional questions during onboarding to enrich lead data, but do not gate the product behind forms. --- # PLG funnel Source: https://www.skene.ai/resources/glossary/plg-funnel Category: Funnels Also known as: Product-led funnel, Self-serve funnel, PLG conversion funnel, User funnel A funnel that tracks how users move from acquisition to activation, engagement, and expansion in a product-led motion. A PLG funnel is a structured view of how users move from discovering your product to activating, engaging, and eventually expanding their usage. Unlike traditional marketing or sales funnels that track form fills and meetings, a PLG funnel is anchored in in-product milestones and behaviors. It measures what users actually do in the product, making it a more accurate representation of the customer journey in a product-led motion. Understanding and optimizing each stage of your PLG funnel is essential for driving self-serve growth. The PLG funnel gives product, marketing, and sales teams a shared language for discussing where users succeed and where they struggle. By instrumenting each stage and measuring conversion rates between them, you can identify the highest-leverage opportunities for growth and prioritize your roadmap around the bottlenecks that matter most. ## Definition A PLG funnel is a structured view of how users move from discovering your product to activating, engaging, and eventually expanding their usage. Instead of focusing only on marketing or sales stages, a PLG funnel is anchored in in-product milestones and behaviors. ## Common stages in a PLG funnel Typical PLG funnel stages include: acquisition, signup or self-serve signup, onboarding, activation, feature adoption, and expansion. Many teams also add intermediate stages such as "aha moment" or PQL creation, depending on their product. Some products add a "habit" stage between activation and expansion to track when users become regular users. ## PLG funnel vs traditional sales funnel Traditional funnels: Lead → MQL → SQL → Opportunity → Closed Won. Stages are based on marketing and sales activities. PLG funnels: Visitor → Signup → Activated → Engaged → PQL → Expanded. Stages are based on product usage. PLG funnels put the product at the center, measuring whether users are actually getting value. In a traditional funnel, a "qualified" lead is someone who expressed interest through a form fill or took a sales call. 
In a PLG funnel, a qualified lead is someone who has demonstrated value realization through product usage. Traditional funnels are controlled by marketing and sales teams. PLG funnels require product, engineering, and growth teams to collaborate because the product itself is the primary conversion mechanism. The PLG funnel does not eliminate the need for sales—it changes when and how sales gets involved. Instead of creating interest from scratch, sales engages users who have already experienced value and are ready to expand. ## Key metrics at each funnel stage Acquisition: Track visitor volume, traffic sources, and cost per visitor. Understand which channels bring users who are most likely to activate, not just the most total visitors. Signup: Measure signup conversion rate (visitors to signups), time to complete signup, and signup drop-off by step. Each additional form field or verification step reduces conversion. Onboarding: Track onboarding completion rate, time to complete onboarding, and step-by-step drop-off rates. Identify which steps lose the most users. Activation: Measure activation rate (signups who reach the activation event), time to activation, and the correlation between activation and long-term retention. Engagement: Track daily/weekly/monthly active users, feature adoption breadth, and usage frequency. Look for the habits that indicate a user has integrated the product into their workflow. Expansion: Measure PQL conversion rate, expansion revenue per account, and time from activation to expansion. This is where PLG revenue growth happens. ## Common PLG funnel leaks and how to fix them Leak: High visitor-to-signup drop-off. Fix: Simplify your signup flow, add social login options, remove unnecessary form fields, and ensure your landing page clearly communicates the value users will get after signing up. Leak: Users sign up but never start onboarding. Fix: Send a well-timed welcome email within minutes, use in-app prompts to guide first actions, and reduce the gap between signup and the first meaningful action. Leak: Users start onboarding but drop off before activation. Fix: Shorten the onboarding path, use progress indicators to show how close users are to completion, offer skip options for non-essential steps, and provide sample data so users can experience value before committing their own data. Leak: Users activate but do not return. Fix: Set up re-engagement triggers (email, push notifications) based on inactivity, investigate whether the activation event truly correlates with long-term value, and look for missing "habit loops" that would bring users back regularly. Leak: Engaged users do not convert to paid. Fix: Review your pricing and packaging to ensure the free-to-paid boundary aligns with natural expansion points, surface upgrade prompts at moments of value rather than arbitrary limits, and ensure sales has visibility into PQL signals. ## Measuring your PLG funnel Track conversion rates between each stage to identify bottlenecks. Measure time spent in each stage to identify where users stall. Segment funnel metrics by acquisition channel, use case, and plan to understand which paths work best. Build a single dashboard that shows the full funnel with conversion rates between each stage. Review it weekly as a cross-functional team. 
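A funnel dashboard of this kind reduces to a small computation once each stage is defined as a reachable milestone. The sketch below is a minimal illustration: stage names follow the PLG funnel described above, and tracking is milestone-based rather than strictly ordered, so non-linear user paths are handled.

```typescript
// Ordered funnel stages; each user is represented by the set of
// milestones they have reached, regardless of the order reached.
const STAGES = ["visitor", "signup", "activated", "engaged", "pql", "paid"] as const;
type Stage = (typeof STAGES)[number];

function funnelConversion(users: Set<Stage>[]) {
  const reached = STAGES.map(
    stage => users.filter(u => u.has(stage)).length,
  );
  return STAGES.map((stage, i) => ({
    stage,
    count: reached[i],
    // Conversion from the previous stage; the largest drop is the bottleneck.
    conversionFromPrevious:
      i === 0 || reached[i - 1] === 0 ? 1 : reached[i] / reached[i - 1],
  }));
}

// Example: three users at different depths of the funnel.
console.log(
  funnelConversion([
    new Set<Stage>(["visitor", "signup", "activated"]),
    new Set<Stage>(["visitor", "signup"]),
    new Set<Stage>(["visitor", "signup", "activated", "engaged", "pql", "paid"]),
  ]),
);
```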
## Examples - B2B SaaS PLG funnel example: A project management tool tracks: Website Visitor (100%) → Signup (5%) → Onboarding Started (85% of signups) → Activated (created first project with 2+ tasks, 40% of signups) → Engaged (active 3+ days in first week, 22% of signups) → PQL (3+ team members active, 8% of signups) → Paid (3% of signups). The biggest drop-off is between onboarding started and activation, so the team focuses on reducing friction in project creation. - Developer tool PLG funnel example: An API platform tracks: Docs Visitor → Signup (8%) → API Key Created (70% of signups) → First API Call (45% of signups) → Production Integration (15% of signups) → Rate Limit Hit / PQL (6% of signups) → Paid Plan (4% of signups). The funnel reveals that the biggest leak is between "first API call" and "production integration," suggesting that the transition from sandbox to production needs better documentation and tooling. - Collaboration tool PLG funnel with viral loop: A team communication platform adds a viral loop to its funnel: Visitor → Signup → Invite Sent (60% of signups) → Teammate Joined (35% of signups) → Team Active (3+ members, 20% of signups) → Workspace PQL (daily usage + integration, 10% of signups) → Paid (5% of signups). Each invited teammate becomes a new top-of-funnel visitor, creating a compounding growth loop that reduces customer acquisition cost over time. ## Implementation notes - Define clear, measurable events for each funnel stage before building dashboards. - Start with a simple 4–5 stage funnel and add complexity only when you have clear hypotheses to test. - Review funnel metrics weekly to spot trends early. - Assign an owner to each funnel stage so there is clear accountability for improving conversion at that step. - Use cohort analysis within each funnel stage to track whether conversion rates are improving over time, not just looking at aggregate numbers. ## FAQ Q: How is a PLG funnel different from a sales funnel? A: A sales funnel tracks interactions with sales (calls, demos, proposals). A PLG funnel tracks interactions with the product (signup, activation, usage). In PLG, the product is the primary driver of conversion, so the funnel reflects product usage rather than sales activity. Q: What is a good conversion rate for each PLG funnel stage? A: Benchmarks vary by product type, but rough guidelines: Visitor to Signup (2–10%), Signup to Activated (20–40%), Activated to Engaged (40–60%), Engaged to Paid (5–20%). Focus on improving your baseline rather than hitting universal benchmarks. Q: How do I identify where users drop off in my PLG funnel? A: Calculate conversion rates between each stage. The biggest drop-off is your biggest opportunity. Then dig deeper: segment by acquisition channel, use case, and behavior to understand why users are dropping off at that stage. Q: Should I optimize the top or bottom of the PLG funnel first? A: Generally, start from the bottom and work up. Improving activation and engagement rates means every new signup is more likely to convert. If you optimize acquisition first but have a leaky funnel, you are just pouring more users into a broken experience. Once your activation-to-paid path is solid, scaling acquisition becomes much more efficient. Q: How do I handle users who skip stages in the PLG funnel? A: Not every user follows a linear path. Some power users may skip onboarding entirely and activate immediately. Others may bounce between stages. 
Design your funnel tracking to handle non-linear paths by focusing on whether each milestone was reached, regardless of the order. Use the funnel as a framework for identifying bottlenecks, not as a rigid sequence every user must follow. Q: Can a PLG funnel work alongside a sales-led funnel? A: Yes, and many successful companies run both in parallel. Users who self-serve follow the PLG funnel, while enterprise prospects who need custom contracts follow a sales-led funnel. The two funnels often converge when self-serve users become PQLs and are handed to sales for expansion conversations. The key is having clear definitions for when a user transitions from the PLG funnel to the sales funnel. --- # Free trial vs freemium Source: https://www.skene.ai/resources/glossary/free-trial-vs-freemium Category: Pricing Also known as: Free trial, Freemium model, Freemium vs trial, PLG pricing models Two common PLG entry models: time-limited free trials and always-free freemium tiers, each with different tradeoffs. Free trials and freemium are the two most common entry models in product-led growth. Free trials give users full or nearly full access for a limited time before requiring payment. Freemium models provide an always-free tier with constraints such as limited seats, features, or usage. Each has distinct tradeoffs: trials create urgency but require fast time-to-value; freemium drives broader adoption but requires careful limit design. Many PLG companies use hybrid models that combine elements of both. ## Definition Free trials give users full or nearly full access to the product for a limited time window before requiring payment. Freemium models provide an always-free tier with constraints such as limited seats, features, or usage. Some products offer both: a free tier plus a trial of premium features. ## When to use free trials When your product requires significant setup and you want users to complete it before the trial ends. When your value proposition is clear quickly but ongoing value requires payment. When you want to create urgency and a clear decision point. ## When to use freemium When your product benefits from network effects or viral growth. When time-to-value is long and users need extended evaluation periods. When you want to build a large user base for community, content, or brand awareness. ## Tradeoffs between free trial and freemium Free trials concentrate evaluation into a shorter time period and can create clearer upgrade moments, but they require users to move fast. Freemium models can drive wider top-of-funnel adoption and long-tail usage, but they require careful limits so that value is real while still leaving room for paid expansion. Trials may lose users who are not ready to decide; freemium may create long-term freeloaders who never convert. ## Hybrid models Reverse trial: Users start with full access, then transition to a free tier if they do not convert. Freemium + trial: A free tier with a time-limited trial of premium features. Opt-in trial: A free tier where users can request a trial of premium features when ready. ## How Skene fits into free trial and freemium decisions Skene can help you see how activation, time-to-value, and feature adoption differ between free and paid cohorts. Those insights make it easier to decide where to put limits, which milestones to gate, and when to prompt upgrades. ## Implementation notes - If choosing trials, make sure time-to-value is shorter than trial length—users need to experience value before deciding. 
- If choosing freemium, design limits that let users experience real value while creating natural upgrade moments. - Track conversion rates for both models and do not be afraid to experiment with different approaches. ## FAQ Q: What is a reverse trial? A: A reverse trial starts users with full premium access, then transitions them to a free tier if they do not convert. This lets users experience the full product upfront and understand what they would lose by not paying, rather than what they would gain. Q: How do I set the right trial length? A: Your trial should be long enough for users to reach activation and experience value, but short enough to create urgency. Analyze how long it takes your successful users to activate and add a buffer. Common trial lengths are 7, 14, or 30 days. Q: How do I design good freemium limits? A: Good limits let users experience real value (not just a demo) while creating natural upgrade moments. Limit by usage (number of projects, events, storage), seats (team size), or features (advanced capabilities). Avoid limits that make the free tier feel broken or useless. --- # North star metric Source: https://www.skene.ai/resources/glossary/north-star-metric Category: Metrics Also known as: NSM, North star, Primary metric, One metric that matters, OMTM A single metric that best captures the long-term value your product creates for customers and your business. A north star metric is the primary metric your team uses to represent delivered customer value over time. It should be tightly linked to both customer outcomes and business outcomes, not just short-term activity. In PLG, a clear north star metric helps align product, growth, and sales teams around the same definition of success. The north star is supported by input metrics (leading indicators like activation and engagement) that teams can directly influence. ## Definition A north star metric is the primary metric your team uses to represent delivered customer value over time. It should be tightly linked to both customer outcomes and business outcomes, not just short-term activity. The north star is not a vanity metric—it represents real value delivered to customers that correlates with revenue. ## Examples of north star metrics Slack: Messages sent per week (represents team communication value). Airbnb: Nights booked (represents value to both hosts and guests). Figma: Weekly active editors (represents design collaboration value). Your north star should capture the core value exchange between your product and your users. ## The role of a north star metric in PLG In PLG, a clear north star metric helps align product, growth, and sales teams around the same definition of success. Activation, time-to-value, and expansion metrics often support the north star metric as leading indicators. Teams can focus on improving input metrics that ladder up to the north star. ## How to choose your north star metric It should measure value delivered, not just activity (e.g., "analyses run" not "logins"). It should be measurable and movable—your team should be able to influence it. It should be leading (or at least correlated with) revenue and retention. ## Using Skene to support your north star metric Skene turns onboarding journeys and milestones into measurable events, which can be rolled up into your north star metric. As you refine your journeys, Skene keeps the measurement layer in sync so your north star stays grounded in real usage. 
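As an illustration of rolling raw events up into a north star, the sketch below counts weekly users who performed a core value action (in the spirit of "analyses run", not "logins"). The event shape and the VALUE_ACTION name are assumptions for illustration.

```typescript
// Hypothetical value events emitted by the product.
interface ValueEvent {
  userId: string;
  action: string;
  week: string; // ISO week, e.g. "2025-W14"
}

const VALUE_ACTION = "analysis_run"; // assumption: your core value action

// North star here: distinct users delivering value each week.
function northStarByWeek(events: ValueEvent[]): Map<string, number> {
  const byWeek = new Map<string, Set<string>>();
  for (const e of events) {
    if (e.action !== VALUE_ACTION) continue; // count value, not mere activity
    const users = byWeek.get(e.week) ?? new Set<string>();
    users.add(e.userId);
    byWeek.set(e.week, users);
  }
  return new Map([...byWeek].map(([week, users]) => [week, users.size]));
}
```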
## Implementation notes - Your north star should be stable over time—do not change it frequently or teams lose focus. - Break the north star into input metrics that different teams can own and influence. - Review north star progress weekly or monthly, not daily—it is a lagging indicator. ## FAQ Q: How is a north star metric different from a KPI? A: KPIs are any key performance indicators your team tracks. The north star is your single most important metric that represents customer value. You might have many KPIs, but only one north star. KPIs often serve as input metrics that ladder up to the north star. Q: Can I have multiple north star metrics? A: By definition, no—the north star is your single most important metric. If you have multiple, you do not have a north star. However, you can have different north stars for different product lines or business units if they serve distinct customer needs. Q: How often should I change my north star metric? A: Rarely. Your north star should be stable for at least a year. Changing it frequently creates confusion and makes it hard to track progress. Only change it if your business model fundamentally changes or you realize your current north star does not correlate with customer value. --- # Product usage signal Source: https://www.skene.ai/resources/glossary/product-usage-signal Category: Signals Also known as: Usage signal, Product signal, Behavioral signal, In-product signal A behavioral pattern in your product data that indicates customer intent, health, or risk. A product usage signal is a pattern in how users interact with your product that tells you something meaningful about their intent or health. In PLG, product usage signals replace traditional sales signals (like email opens or meeting requests) as the primary way to understand customer intent. Positive signals indicate expansion readiness or high engagement; negative signals indicate churn risk or disengagement. The ability to capture, interpret, and act on product usage signals is a core PLG capability. ## Definition A product usage signal is a pattern in how users interact with your product that tells you something meaningful about their intent or health. Examples include hitting usage limits, repeatedly using a key feature, or suddenly dropping activity. Signals can be events (something happened) or trends (a pattern over time). ## Types of product usage signals Intent signals: Visiting pricing pages, exploring premium features, hitting usage limits. Engagement signals: Login frequency, feature breadth, depth of usage. Risk signals: Declining activity, abandoned onboarding, decreasing use of core features. Expansion signals: Adding teammates, using more of a usage-based resource, adopting new features. ## Examples of product usage signals in PLG Positive signals: completing onboarding journeys, inviting teammates, or consistently returning to core workflows. Risk signals: declining logins, abandoned onboarding, or decreasing use of value-driving features. A user who completes onboarding, invites two teammates, and logs in three days in a row is signaling engagement. ## How to use product usage signals PQL scoring: Combine signals to identify accounts ready for sales outreach. Health scoring: Aggregate signals into a customer health score for success teams. Automated workflows: Trigger emails, in-app messages, or alerts based on signal patterns. Churn prediction: Use declining signals as leading indicators of churn risk. 
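Clear thresholds make signals actionable. The sketch below evaluates a weekly account snapshot against a few illustrative thresholds and tags each detected signal by type; all field names and cutoffs are assumptions to adapt to your own data.

```typescript
// Hypothetical weekly account snapshot used to evaluate signal thresholds.
interface AccountWeek {
  accountId: string;
  logins: number;
  teammatesInvited: number;
  onboardingCompleted: boolean;
  usageLimitPct: number; // 0..1 of plan quota consumed
  coreActionsThisWeek: number;
  coreActionsLastWeek: number;
}

type Signal = {
  accountId: string;
  signal: string;
  kind: "intent" | "engagement" | "risk";
};

// Explicit thresholds, per the note above ("logged in 3+ times this week").
function detectSignals(a: AccountWeek): Signal[] {
  const out: Signal[] = [];
  if (a.usageLimitPct >= 0.8)
    out.push({ accountId: a.accountId, signal: "approaching_usage_limit", kind: "intent" });
  if (a.logins >= 3 && a.teammatesInvited >= 2)
    out.push({ accountId: a.accountId, signal: "engaged_team", kind: "engagement" });
  if (a.coreActionsLastWeek > 0 && a.coreActionsThisWeek < a.coreActionsLastWeek / 2)
    out.push({ accountId: a.accountId, signal: "declining_core_usage", kind: "risk" });
  if (!a.onboardingCompleted && a.logins === 0)
    out.push({ accountId: a.accountId, signal: "abandoned_onboarding", kind: "risk" });
  return out;
}
```

Signals of kind "intent" might route to sales, "risk" to customer success, and "engagement" to lifecycle messaging, matching the playbook pattern described in the FAQ below.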
## Product usage signals and Skene Skene ties journeys and milestones directly to analytics so you can define and monitor product usage signals with less custom wiring. Those signals can then power PQL models, health scores, and proactive success playbooks. ## Implementation notes - Start with a few high-value signals rather than tracking everything—signal noise is a real problem. - Define clear thresholds for signals (e.g., "logged in 3+ times this week" rather than "active"). - Validate signals by checking if they actually correlate with the outcomes you care about (conversion, retention, churn). ## FAQ Q: How do I identify which signals matter for my product? A: Start by analyzing your best customers (highest retention, highest NRR) and identify what they did early in their journey. Then look at churned customers and identify what they did not do. The behaviors that differentiate these groups are your most important signals. Q: How many signals should I track? A: Start with 5–10 key signals that cover intent, engagement, and risk. Too many signals create noise and make it hard to prioritize. You can always add more as you learn which signals actually predict outcomes. Q: How do I act on product usage signals? A: Build workflows that route signals to the right teams. Expansion signals go to sales, risk signals go to customer success, and engagement signals might trigger in-app messages or email campaigns. The key is to have clear playbooks for each signal type. --- # Onboarding checklist Source: https://www.skene.ai/resources/glossary/onboarding-checklist Category: Onboarding Also known as: Setup checklist, Getting started checklist, Onboarding tasks, Setup wizard A visible list of key steps new users should complete to reach activation in your product. An onboarding checklist is a set of clearly listed steps that helps new users understand what they need to do to get value from your product. It usually appears in-product and tracks progress toward an activation milestone. Checklists are one of the most effective onboarding patterns in PLG because they provide clarity, create a sense of progress, and leverage completion psychology to motivate users. Well-designed checklists can significantly improve activation rates and time-to-value. ## Definition An onboarding checklist is a set of clearly listed steps that helps new users understand what they need to do to get value from your product. It usually appears in-product and tracks progress toward an activation milestone. Unlike product tours, checklists let users work at their own pace and in their preferred order (if steps are independent). ## Why checklists work Completion effect: Users are motivated to finish what they start—progress bars and checkmarks create momentum. Clarity: Checklists reduce uncertainty by showing exactly what needs to be done. Autonomy: Users can choose their own path rather than being forced through a linear tour. ## Best practices for onboarding checklists Good onboarding checklists are short, outcome-focused, and tailored to specific roles or use cases. They avoid vanity steps and focus on actions that strongly correlate with activation and retention. Aim for 3–7 items. More than 7 feels overwhelming; fewer than 3 may not provide enough structure. ## Designing effective checklist items Each item should be a clear action, not a vague goal (e.g., "Connect your GitHub repo" not "Set up integrations"). Items should be achievable in a single session—break complex tasks into smaller steps. 
Consider adding estimated time for each item to set expectations. ## Measuring checklist effectiveness Track completion rate for each item and the overall checklist. Measure drop-off between items to identify friction points. Compare activation rates and retention for users who complete the checklist vs. those who do not. ## Onboarding checklists and Skene Skene generates onboarding journeys and can expose them as checklists that stay in sync with your underlying product flows. Because checklists are tied to milestones and analytics, you can see how completing each step affects activation and time-to-value. ## Implementation notes - Pre-check items the user has already completed to give them a head start and show the checklist is personalized. - Consider making the checklist dismissible but easy to find again—some users prefer to explore on their own. - Test different orderings of items to see which sequence leads to highest completion and activation. ## FAQ Q: How many items should be in an onboarding checklist? A: Aim for 3–7 items. Research shows that 5 is often optimal—enough to provide structure without overwhelming. If you have more steps, consider breaking them into phases (e.g., "Getting Started" and "Going Deeper"). Q: Should I force users to complete the checklist? A: No. Make the checklist visible and encouraged, but let users dismiss it if they prefer to explore on their own. Forcing completion creates friction for experienced users. Track who completes vs. dismisses to understand different user segments. Q: When should the checklist disappear? A: There are two schools of thought: hide it once all items are complete, or hide it after the user reaches activation (even if some items remain). The latter respects that some users may not need every step to get value. --- # Activation rate Source: https://www.skene.ai/resources/glossary/activation-rate Category: Metrics Also known as: Activation metric, Activation percentage, Activation conversion rate The percentage of new users or accounts that reach your defined activation milestone within a given time window. Activation rate measures what share of new users or accounts actually reach your agreed definition of "activated." It connects your onboarding and product experience to business results by showing how effectively you turn signups into meaningful value. In a PLG motion, improving activation rate is often one of the highest-leverage ways to grow revenue and retention. ## What is activation rate? Activation rate is the percentage of new users or accounts that reach a predefined activation milestone within a chosen time window, such as the first 7 or 30 days. It focuses on whether users actually experience the first meaningful outcome in your product, not just whether they sign up or log in once. ## How to define activation rate correctly Start by agreeing on a concrete activation event that clearly represents real value, such as "created and shared a dashboard" or "deployed first integration into production." Choose a time window that reflects how long it should reasonably take a motivated user to get to that milestone; many teams start with 7, 14, or 30 days and then refine from there. ## How to measure and segment activation rate Compute activation rate as: activated users ÷ new signups in the same cohort window, typically grouped by week or month of signup. Segment activation rate by acquisition channel, role, plan, and product surface so you can see where your onboarding works well and where it fails. 
## How Skene helps you improve activation rate Skene infers candidate activation milestones from your codebase and maps them to journeys and events, so you can measure activation rate without building custom tracking from scratch. As you adjust onboarding content or flows, Skene keeps the underlying milestones and dashboards in sync, making it easier to see which changes actually move activation up or down. ## Implementation notes - Pick a single primary activation definition and resist the urge to track many competing versions at once. - Always pair activation rate with time-to-value so you do not hide slow, painful onboarding behind a single percentage. - Track activation rate by cohort to see if newer users are activating better than older cohorts. ## FAQ Q: What is a good activation rate for SaaS? A: Activation rates vary widely. B2B SaaS products often see 20–40%, while consumer products may range from 10–30%. More important than hitting a benchmark is measuring your baseline and improving it over time. A 5% improvement in activation rate can significantly impact revenue. Q: How do I improve my activation rate? A: Start by identifying where users drop off before activation. Reduce friction in onboarding, shorten time-to-value with better defaults, and ensure your activation milestone is achievable in a reasonable time. Segment your data to find which user groups activate best and learn from their behavior. Q: What time window should I use for measuring activation rate? A: Choose a window that reflects your product complexity. Simple products might use 7 days, mid-complexity products 14–30 days, and complex enterprise products 30–60 days. The window should be long enough for motivated users to activate but short enough to drive urgency. --- # Customer health score Source: https://www.skene.ai/resources/glossary/customer-health-score Category: Retention Also known as: Account health score, Customer health index, CHS, Health score A composite metric that summarizes how likely a customer is to renew, expand, or churn based on product usage and contextual signals. A customer health score is an opinionated, weighted index of signals that indicate how likely an account is to renew, expand, or churn. In PLG, health scores rely heavily on product usage data rather than just subjective sentiment. A good health score clarifies where to focus your success and sales efforts instead of treating all customers the same. ## What is a customer health score? A customer health score is a single metric, often on a 0–100 or red/amber/green scale, that represents how healthy a customer relationship is. It combines multiple signals such as product usage, adoption breadth, support tickets, NPS, and commercial data (e.g. contract value or tenure) into one view. ## Common components of a health score Product usage: frequency of logins, depth of feature adoption, completion of key onboarding journeys. Business value: outcomes achieved, active projects, usage of value-driving features, and realized ROI. Engagement: support interactions, stakeholder participation, and responsiveness to success outreach. ## Customer health scores in a PLG motion In PLG, health scores should be strongly anchored in product behavior rather than subjective opinions alone. A well-calibrated health score lets you prioritize success outreach, renewal conversations, and expansion plays based on real usage and outcomes. 
## How Skene feeds better health scores Skene automatically measures completion of journeys, milestones, and key PLG metrics like activation, time-to-value, and feature adoption. Those signals can be used directly as inputs into your health score, making it easier to keep the score aligned with what actually drives retention and expansion. ## Implementation notes - Start with a simple model (e.g., a handful of weighted signals) and validate it against historical churn and expansion before adding complexity. - Make sure every component of the score is measurable, up to date, and clearly understood by sales and success teams. - Review and recalibrate your health score quarterly—what predicts churn may change as your product and customer base evolve. ## FAQ Q: How do I build a customer health score? A: Start by identifying 3–5 signals that correlate with retention and expansion (e.g., login frequency, feature adoption, support tickets). Weight each signal based on its predictive power, validated against historical churn data. Start simple and add complexity only when you have evidence it improves predictions. Q: What should I do with customers who have low health scores? A: Low health scores should trigger proactive outreach from customer success. The goal is to understand why engagement is low and help the customer get back on track. This might mean re-onboarding, addressing product issues, or realigning on goals. Sometimes low health indicates poor fit, and it is better to address that directly. Q: How often should I update health scores? A: For most B2B SaaS products, daily or weekly updates work well. Real-time updates can create noise and make it hard to spot trends. The key is that scores should be fresh enough to be actionable—if a customer's health drops, you want to know before it is too late to intervene. --- # Onboarding friction Source: https://www.skene.ai/resources/glossary/onboarding-friction Category: Onboarding Also known as: Onboarding drop-off, Onboarding bottlenecks, Setup friction, First-run friction Any obstacle, delay, or confusion that slows a new user or account from reaching their first meaningful outcome. Onboarding friction describes all the moments where new users lose momentum on their way to activation: confusing steps, missing data, long forms, or technical blockers. In PLG, removing onboarding friction is one of the fastest ways to improve activation rate and time-to-value, because it directly addresses where users stall or drop off. ## What is onboarding friction? Onboarding friction is any point in the new-user journey where progress toward activation becomes slower, confusing, or blocked. Friction can be UX-related (too many fields, unclear copy), technical (integration failures, permission issues), or organizational (needing approvals or help from others). ## Common examples of onboarding friction Long signup forms that ask for information users do not yet have or are not ready to share. Mandatory configuration steps that require engineering time or admin access before a user can see any value. Unclear instructions or dead ends in a multi-step journey that leave users unsure what to do next. ## How to measure onboarding friction Instrument each step in your onboarding journeys and look for sharp drop-offs or long delays between steps. Track time-to-value and activation rate by segment to identify which personas or channels experience the most friction. 
## How Skene helps reduce onboarding friction Skene turns your codebase into explicit journeys and milestones, making it easier to see exactly where users stall. You can experiment with simplified journeys, reordered steps, or automated guidance and immediately see whether time-to-value and completion rates improve. ## Implementation notes - Start by instrumenting and visualizing your current onboarding journeys before rewriting them; the goal is to understand where friction actually is, not just guess. - Treat onboarding friction as a continuous optimization problem, not a one-time project—new product work often introduces new friction. - Use session replay tools to watch real users struggle—you will often spot friction that analytics alone would miss. ## FAQ Q: How do I identify onboarding friction in my product? A: Instrument each step of your onboarding and look for drop-offs or long delays between steps. Watch session recordings of users who abandon onboarding. Survey users who did not complete onboarding to understand why. Compare successful activators to non-activators to identify the steps that differentiate them. Q: What are the most common causes of onboarding friction? A: Common causes include: too many required fields at signup, mandatory integrations before value, unclear next steps, technical errors or permission issues, waiting for external dependencies (e.g., data imports), and too much information presented at once. Q: How much can reducing friction improve activation? A: It varies, but removing major friction points often improves activation by 20–50%. Even small improvements—reducing a form from 5 fields to 3, or adding a loading indicator—can have measurable impact. The key is to measure before and after each change. --- # Trial-to-paid conversion rate Source: https://www.skene.ai/resources/glossary/trial-to-paid-conversion-rate Category: Funnels Also known as: Trial conversion rate, Free-to-paid conversion, Trial conversion, Paid conversion rate The percentage of trial signups that become paying customers within or shortly after the trial period. Trial-to-paid conversion rate measures how effectively your free trial turns signups into paying customers. In a PLG model, it captures how well your onboarding, product experience, and pricing work together to get users from "trying" to "buying." A healthy trial-to-paid rate signals that your trial design, activation milestones, and upgrade prompts are aligned with real value. ## What is trial-to-paid conversion rate? Trial-to-paid conversion rate is the percentage of users or accounts that start a free trial and end up on a paid plan within a defined period (often by trial end plus a short grace window). It is a core PLG funnel metric because it connects the top of your trial funnel to actual revenue. ## What drives trial-to-paid conversion? Clear value discovery during the trial, ideally by hitting a well-chosen activation milestone early. Fair, understandable pricing and upgrade paths that make it easy to say "yes" once value is proven. Timely prompts and success outreach that help users over the last mile from trial to decision. ## How to measure and improve trial conversion Track trial cohorts and calculate conversion as paying accounts ÷ trial-starting accounts for each cohort. Segment conversion by acquisition source, use case, and product surface; then run targeted experiments on messaging, onboarding flows, and pricing to improve weak segments. 
## How Skene supports better trial conversion Skene helps you define and instrument activation milestones that sit at the heart of a successful trial, so you can see which users are actually getting value. By connecting journeys to analytics, Skene lets you correlate specific onboarding changes with improvements in trial-to-paid conversion over time. ## Implementation notes - Choose a trial length that gives typical users enough time to reach activation, not just enough time to explore menus. - Design your trial journey around one or two clear outcomes; avoid overwhelming users with every feature during a short window. - Send well-timed emails throughout the trial—do not wait until the last day to engage. ## FAQ Q: What is a good trial-to-paid conversion rate? A: Benchmarks vary widely. Opt-in trials (no credit card required) typically see 2–10% conversion. Opt-out trials (credit card required upfront) often see 25–60%. B2B products with higher ACVs may see lower volume but higher-value conversions. Focus on improving your baseline rather than hitting a universal benchmark. Q: Should I require a credit card for the trial? A: It depends on your goals. Credit card trials have higher conversion rates but lower trial starts. No-card trials have lower conversion but higher trial volume. If your product has strong activation, no-card trials let more users experience value. If activation is hard, requiring a card filters for more committed users. Q: How can I improve trial-to-paid conversion? A: Focus on getting users to activation quickly. Personalize the trial experience based on use case. Send timely, relevant emails that guide users to value. Make pricing clear and the upgrade path frictionless. Follow up with users who activated but did not convert to understand their hesitation. --- # Net revenue retention (NRR) Source: https://www.skene.ai/resources/glossary/net-revenue-retention-nrr Category: Revenue Also known as: Net dollar retention, NDR, NRR, Dollar retention, Revenue retention A revenue metric that captures how much recurring revenue you retain and expand from existing customers over a period, including churn and expansion. Net revenue retention (NRR) measures how your existing customers grow or shrink over time. It looks at starting recurring revenue from a cohort, subtracts churn and contraction, and adds expansion from upsells and cross-sells. In strong PLG companies, NRR above 100% signals that product usage is driving enough expansion to more than offset churn. ## Definition and formula Net revenue retention (NRR) tells you how your recurring revenue from existing customers changes over a period, usually a month, quarter, or year. A common formula is: (Starting MRR + Expansion MRR – Churned MRR – Contraction MRR) ÷ Starting MRR. ## How to interpret NRR in PLG NRR above 100% means your existing customers are growing their spend over time; NRR below 100% means churn and downgrades outweigh expansions. High NRR is often a hallmark of mature PLG companies where product usage naturally leads to more seats, more usage, or higher-value plans. ## Levers to improve net revenue retention Increase expansion by tying pricing to usage or outcomes that grow as customers succeed (e.g., seats, transactions, active projects). Reduce churn by improving activation, onboarding journeys, and feature adoption so more customers reach and maintain value. 
## How Skene influences NRR Skene measures completion of key journeys and milestones that are often precursors to expansion events, such as adding users, enabling integrations, or rolling out to new teams. By making these product signals visible, Skene helps you identify at-risk accounts early and design expansion plays rooted in real product usage. ## Implementation notes - Always analyze NRR by segment (e.g., by plan, industry, or company size) to understand where PLG is strongest or weakest. - Pair NRR with leading indicators like activation rate, time-to-value, and feature adoption to move from lagging to leading insights. - Calculate NRR monthly for operational tracking, but report it annually for benchmarking (monthly NRR annualized can be misleading). ## FAQ Q: What is a good NRR for SaaS companies? A: Good NRR varies by segment. SMB-focused companies typically see 90–100%. Mid-market companies often achieve 100–120%. Enterprise companies may reach 120–150%. The best PLG companies (Slack, Datadog, Snowflake) have historically exceeded 130%, meaning existing customers grow 30%+ year-over-year. Q: How is NRR different from gross revenue retention (GRR)? A: GRR only measures revenue lost (churn and contraction) without including expansion. GRR is always ≤100%. NRR includes both losses and expansion, so it can exceed 100%. GRR tells you about customer satisfaction and retention; NRR tells you about your overall growth from existing customers. Q: How can I improve my NRR? A: NRR improves by reducing churn and increasing expansion. Reduce churn by improving activation, onboarding, and ongoing engagement. Increase expansion by designing pricing that grows with customer success (usage-based, per-seat) and by helping customers adopt more of your product. --- # Monthly recurring revenue (MRR) Source: https://www.skene.ai/resources/glossary/monthly-recurring-revenue-mrr Category: Revenue Also known as: MRR, Monthly revenue, Recurring revenue The predictable revenue a subscription business earns each month from active customers. Monthly recurring revenue (MRR) is the predictable revenue a subscription business earns each month from active customers. It normalizes different contract terms (monthly, annual, multi-year) into a single monthly figure, making it easier to track growth, forecast revenue, and compare performance over time. In PLG, MRR is a lagging indicator that reflects the cumulative impact of acquisition, activation, retention, and expansion. ## Definition MRR is the sum of all recurring revenue from active subscriptions, normalized to a monthly amount. Annual contracts are divided by 12; quarterly contracts by 3. One-time fees and usage overages may or may not be included depending on your definition. ## Components of MRR New MRR: Revenue from new customers acquired this month. Expansion MRR: Additional revenue from existing customers (upgrades, add-ons, seat additions). Contraction MRR: Lost revenue from downgrades. Churned MRR: Lost revenue from cancelled subscriptions. ## How to calculate MRR Sum all active subscriptions, normalizing annual contracts to monthly equivalents. Net New MRR = New MRR + Expansion MRR - Contraction MRR - Churned MRR. Track each component separately to understand what is driving growth or decline. ## MRR in a PLG context In PLG, MRR growth is often driven more by expansion than new logos, since customers start small and grow. Tracking MRR by acquisition source helps you understand which PLG channels are most valuable. 
## Implementation notes - Be consistent about what you include in MRR—document your definition and stick to it. - Track MRR components (new, expansion, contraction, churn) separately to understand growth drivers. - Use committed MRR (signed contracts) for forecasting; recognized MRR for financial reporting. ## FAQ Q: Should I include one-time fees in MRR? A: Generally, no. MRR should reflect predictable recurring revenue. One-time fees (setup, implementation, professional services) are typically tracked separately. Including them inflates MRR and makes month-over-month comparisons misleading. Q: How do I handle annual contracts in MRR? A: Divide annual contract value by 12 to get the monthly equivalent. A $12,000 annual contract contributes $1,000 MRR. This normalizes different contract terms for comparison. Q: What is the difference between MRR and ARR? A: ARR (Annual Recurring Revenue) is simply MRR × 12. Companies report whichever is more relevant—MRR for monthly billing cycles and fast-moving metrics, ARR for annual planning and enterprise contracts. --- # Annual recurring revenue (ARR) Source: https://www.skene.ai/resources/glossary/annual-recurring-revenue-arr Category: Revenue Also known as: ARR, Annual revenue, Annualized recurring revenue The annualized value of recurring subscription revenue, used for long-term planning and valuation. Annual recurring revenue (ARR) is the annualized value of recurring subscription revenue, calculated as MRR × 12. It provides a longer-term view of revenue that is useful for annual planning, investor reporting, and company valuation. In PLG, ARR growth is driven by a combination of new customer acquisition, expansion within existing accounts, and strong retention. ## Definition ARR is MRR multiplied by 12, representing the annual value of your current recurring revenue. It assumes current MRR continues for a full year—it is a snapshot, not a forecast. ## When to use ARR vs MRR Use ARR for annual planning, board reporting, and fundraising conversations. Use MRR for month-to-month operational tracking and identifying short-term trends. Enterprise-focused companies often emphasize ARR; SMB-focused companies may emphasize MRR. ## ARR milestones in SaaS Common milestones: $1M ARR (product-market fit signal), $10M ARR (scaling stage), $100M ARR (growth stage). ARR growth rate matters more than absolute ARR for early-stage companies. ## Implementation notes - Use ARR for external reporting and MRR for internal operations. - Be clear about whether ARR includes committed (signed) or recognized revenue. - Track ARR growth rate (year-over-year) as a key indicator of business health. ## FAQ Q: How do I calculate ARR? A: ARR = MRR × 12. If your MRR is $100,000, your ARR is $1,200,000. For annual contracts, use the full contract value directly. Q: Is ARR the same as revenue? A: No. ARR is a snapshot of recurring revenue potential, not recognized revenue. Actual revenue depends on when contracts are signed, recognized, and collected. For GAAP reporting, use recognized revenue. Q: What is a good ARR growth rate? A: It depends on stage. Early-stage companies (pre-$10M ARR) often target 2–3x growth. Growth-stage companies ($10M–$100M ARR) target 50–100% annual growth. Later-stage companies may target 20–40%. --- # Churn rate Source: https://www.skene.ai/resources/glossary/churn-rate Category: Retention Also known as: Customer churn, Logo churn, Revenue churn The percentage of customers or revenue lost over a given period. 
Churn rate measures the percentage of customers or revenue lost over a given period. In PLG, churn is often tied to activation—users who never reach value are most likely to churn. ## Definition Churn rate is the percentage of customers or revenue lost, typically measured monthly or annually. ## Types of churn Voluntary churn: Customer actively cancels. Involuntary churn: Payment failure. ## FAQ Q: What is a good churn rate? A: SMB: 3–7% monthly. Mid-market: 1–2%. Enterprise: less than 1%. --- # Customer lifetime value (LTV) Source: https://www.skene.ai/resources/glossary/customer-lifetime-value-ltv Category: Revenue Also known as: LTV, CLV, Lifetime value The total revenue a customer generates over their relationship with your business. LTV estimates total revenue from a customer over their lifetime. LTV:CAC ratio should be at least 3:1 for healthy unit economics. ## Definition LTV = ARPA ÷ Monthly Churn Rate. ## LTV:CAC ratio A ratio of 3:1 or higher is typically healthy for SaaS. --- # Customer acquisition cost (CAC) Source: https://www.skene.ai/resources/glossary/customer-acquisition-cost-cac Category: Acquisition Also known as: CAC, Acquisition cost, CPA The total cost to acquire a new paying customer. CAC is total sales and marketing cost divided by new customers acquired. PLG often has lower CAC than sales-led models. ## Definition CAC = (Sales + Marketing Costs) ÷ New Customers. ## CAC payback Healthy SaaS targets 12–18 months or less. --- # Land and expand Source: https://www.skene.ai/resources/glossary/land-and-expand Category: Funnels Also known as: Land & expand, Bottom-up sales A strategy where you acquire customers small and grow revenue through expansion. Land and expand starts with a small initial deal and grows through upsells and seat expansion. In PLG, this happens organically. ## Definition Land: Acquire with small deal. Expand: Grow revenue over time. ## In PLG The land is often free trial or freemium. Expand happens as usage grows. --- # Reverse trial Source: https://www.skene.ai/resources/glossary/reverse-trial Category: Pricing Also known as: Opt-out trial, Premium-first trial Users start with full premium access, then transition to free if they do not convert. Reverse trials show users what they will lose rather than gain, leveraging loss aversion psychology. ## Definition Users start with premium, downgrade to free if they do not pay. ## Why it works Loss aversion: People avoid losing what they have. --- # Growth loops Source: https://www.skene.ai/resources/glossary/growth-loops Category: Foundations Also known as: Viral loops, Flywheel, Growth flywheel Self-reinforcing cycles where user actions drive acquisition of new users. Growth loops are closed systems where the output of each cycle feeds back as input for the next, creating compounding growth over time. Unlike traditional funnels that leak users at every stage, growth loops recycle value so that every new user, piece of content, or data point makes the system stronger. The most durable product-led companies are built on one or more interlocking growth loops spanning viral, content, paid, and AI-powered mechanisms. ## Definition A growth loop is a closed, self-reinforcing system in which user actions produce outputs that become inputs for the next cycle. Each completed loop generates more of the resource that powers growth, whether that resource is new users, content, revenue, or data. The concept was popularized by Reforge as an alternative to the AARRR funnel model. 
Where funnels treat growth as a linear sequence of stages, growth loops treat it as a circular, compounding engine. ## Types of growth loops Viral loops: Existing users invite or expose new users to the product. Examples include referral programs, shared workspaces, and social features that create external visibility. Slack grows virally when a team member invites a colleague, who then invites their own team. Content loops: Users or the product itself generate content that ranks in search or spreads on social platforms, attracting new users. Notion templates, Stack Overflow answers, and Figma community files are all content loops. Paid loops: Revenue from existing customers funds acquisition of new customers through advertising or partnerships. The loop is sustainable when customer lifetime value exceeds acquisition cost with enough margin to reinvest. Data and AI loops: User activity generates data that improves the product (through better recommendations, models, or personalization), which attracts more users who generate more data. This is increasingly important as AI-native products use usage data to fine-tune models and deliver better outcomes. ## How growth loops differ from funnels Funnels are linear: they move users from awareness to activation to retention to revenue. Each stage loses a percentage of users, and the only way to grow faster is to pour more users into the top. Funnels do not explain where new users come from. Growth loops are circular: the output of each stage feeds back into the input. A user who activates creates something (an invite, a piece of content, revenue, or data) that brings in the next user. This means growth can compound rather than requiring constant top-of-funnel investment. In practice, most teams still use funnel metrics to measure conversion at each step, but design their strategy around the loop that powers sustainable growth. ## Designing growth loops Start by mapping the natural actions your best users already take. Look for moments where user activity creates something shareable, visible, or valuable to others. The strongest loops are built on behaviors users would do anyway, not artificial incentives. Define the four parts of the loop: the input (new user or trigger), the action (what the user does), the output (what their action produces), and the reinvestment mechanism (how the output becomes a new input). If any part is weak, the loop will not sustain itself. Focus on one primary loop first. Trying to build multiple loops simultaneously dilutes effort and makes it hard to measure what is working. Layer additional loops once the first is proven. ## AI-powered growth loops AI-native products have a unique opportunity to build data-driven growth loops. As more users interact with the product, the underlying models improve, which makes the product more valuable, which attracts more users who generate more data. This creates a defensible moat that competitors cannot easily replicate. Examples include recommendation engines that improve with more usage data, AI assistants that learn from user corrections, and personalization systems that get better as they observe more behavior patterns. The key to an AI growth loop is ensuring users can feel the improvement. If the model gets better but users cannot perceive the difference, the loop does not close. Make improvements visible through better suggestions, faster results, or more relevant outputs. 
## Measuring loop effectiveness The most important metric for any growth loop is the loop multiplier: how many new inputs does each cycle produce? A viral loop with a multiplier above 1.0 means each user brings in more than one new user, creating exponential growth. Most loops operate below 1.0 and still work well when combined with other acquisition channels. Track cycle time (how long one full loop takes), conversion rate at each step in the loop, and the quality of users or outputs each cycle produces. A fast loop with low-quality output can actually be worse than a slow loop with high-quality output. Use cohort analysis to measure whether loop efficiency improves over time. Healthy loops get more efficient as the product matures because of network effects, more content, or better data. ## Common growth loop mistakes Building loops that rely on artificial incentives rather than natural user behavior. Referral bonuses can kickstart a loop, but if the product does not deliver enough value to sustain organic sharing, the loop collapses when incentives are removed. Optimizing for loop volume without considering quality. A content loop that generates thousands of low-quality pages may drive traffic but hurt brand perception and fail to convert visitors into users. Ignoring the reinvestment mechanism. Many teams design great user experiences but never close the loop by connecting user output back to new user acquisition. The output must systematically become an input. Trying to force a loop type that does not fit the product. Not every product has viral potential. Some products are better suited to content or paid loops. Choose the loop type that matches your product natural behavior. ## Examples - Slack viral growth loop: Slack operates a viral growth loop. A user joins a workspace, finds it valuable, and invites colleagues from other teams or companies. Those colleagues create their own workspaces and invite their own contacts. Each workspace that reaches activation becomes a source of new workspace creation. - HubSpot content-led SEO loop: HubSpot runs a content-led SEO growth loop. Their team and users produce marketing content that ranks in search. New visitors discover HubSpot through this content, sign up for free tools, and some become paying customers. Revenue funds more content production, continuing the cycle. - Spotify data and AI loop: Spotify uses a data and AI growth loop. As users listen to music, their behavior trains recommendation algorithms. Better recommendations increase listening time and satisfaction, which attracts more users and generates more behavioral data for further model improvement. ## Implementation notes - Map your existing user behavior before designing a loop. The best loops amplify what users already do naturally. - Measure the loop multiplier and cycle time from day one. These two numbers tell you whether the loop is viable and how fast it can compound. - Start with one primary loop type that matches your product natural strengths: viral for collaboration tools, content for knowledge platforms, data/AI for personalization products. - Close the loop explicitly. Ensure there is a clear, measurable mechanism that converts user output into new user input. If this step is manual or unreliable, the loop will not scale. - Review loop health monthly using cohort analysis. A healthy loop should show stable or improving efficiency over time. ## FAQ Q: What is the difference between a growth loop and a viral loop? 
A: A viral loop is one type of growth loop where existing users directly bring in new users through invitations, sharing, or collaboration. Growth loops is the broader category that also includes content loops, paid loops, and data/AI loops. Q: Can a product have multiple growth loops running at the same time? A: Yes, and most successful PLG companies do. However, it is best to prove and optimize one primary loop before layering on additional loops. Multiple loops can reinforce each other when they share inputs or outputs. Q: How do growth loops work in AI products? A: AI products can build data-driven growth loops where user interactions generate training data that improves the model, making the product more valuable, which attracts more users who generate more data. The key is making the improvement perceptible to users. Q: What is a good loop multiplier? A: A multiplier above 1.0 means the loop is self-sustaining and will drive exponential growth. Most loops operate between 0.2 and 0.8, which still provides meaningful growth when combined with other channels. Even a 0.5 multiplier means every two users bring in one more. Q: How long does it take to build an effective growth loop? A: Designing the loop can happen in weeks, but proving it works and optimizing it typically takes 3 to 6 months of iteration. The biggest factor is cycle time: loops with shorter cycles can be tested and improved faster. --- # DAU/MAU ratio Source: https://www.skene.ai/resources/glossary/dau-mau-ratio Category: Engagement Also known as: Stickiness ratio, User stickiness Ratio of daily to monthly active users, measuring stickiness. DAU/MAU measures how often users return. 50%+ is very sticky. 20–50% is moderate. ## Definition DAU/MAU = Daily Active Users ÷ Monthly Active Users. ## Interpretation Higher ratio = more habitual usage = harder to churn. --- # Product tour Source: https://www.skene.ai/resources/glossary/product-tour Category: Onboarding Also known as: Guided tour, Walkthrough A guided walkthrough that introduces users to key features. Product tours accelerate time-to-value by showing the fastest path to activation. ## Definition Step-by-step guided experience walking users through key features. ## Best practices Keep short. Focus on value. Make skippable. --- # Usage-based pricing Source: https://www.skene.ai/resources/glossary/usage-based-pricing Category: Pricing Also known as: Consumption-based pricing, Pay-as-you-go, UBP Pricing where customers pay based on how much they use the product. Usage-based pricing aligns cost with value. Customers pay more as they get more value. ## Definition Charges based on consumption: API calls, data, active users. ## Benefits Low barrier to entry. Value alignment. Natural expansion. --- # User segmentation Source: https://www.skene.ai/resources/glossary/user-segmentation Category: Signals Also known as: Customer segmentation, Cohort segmentation Dividing users into groups based on shared characteristics or behaviors. Segmentation helps understand which users are most valuable and tailor experiences. ## Definition Grouping users by attributes or behaviors. ## Common segments By persona. By company size. By use case. By engagement level. --- # Payback period Source: https://www.skene.ai/resources/glossary/payback-period Category: Metrics Also known as: CAC payback, CAC payback period The time it takes to recover the cost of acquiring a customer. CAC payback period = CAC ÷ (Monthly Revenue × Gross Margin). Target 12–18 months or less. 
## Definition Months until a customer pays back their acquisition cost. ## Benchmarks 12–18 months is healthy. PLG often achieves faster payback. --- # Product-market fit Source: https://www.skene.ai/resources/glossary/product-market-fit Category: Foundations Also known as: PMF, Product/market fit When your product satisfies strong market demand. Product-market fit means customers actively want your product. Signs: organic growth, strong retention, word-of-mouth. ## Definition A state where your product meets real market demand. ## Signals of PMF Strong retention. Organic growth. Low churn. --- # Reduce time-to-value for new accounts Source: https://www.skene.ai/resources/playbooks/reduce-time-to-value Job to be done: Shorten the time from signup to first meaningful outcome. Tags: Onboarding, Activation, Time-to-value Last updated: 2026-01-05 Design onboarding, milestones, and instrumentation so new users reach a real outcome in hours instead of weeks. ## Problem context - You see healthy signup volume, but new accounts take a long time to do anything meaningful in the product. - Sales and success teams complain that “users never get past setup”, and you do not have a clear baseline for how long activation takes. - Internal discussions about time-to-value are vague; different teams use different definitions and cannot agree on what “good” looks like. ## What breaks - Conversion from free or trial to paid stalls because users never experience the moment where the product “clicks”. - Support and success spend disproportionate time on basic setup questions instead of higher-leverage work. - Acquisition spend becomes inefficient: each new signup adds to a pool of half-onboarded accounts that rarely activate or expand. ## When this applies - You already have self-serve signup or trials, and at least some users do successfully reach value without heavy human help. - You can identify a concrete activation milestone that correlates with long-term retention or revenue, even if it is imperfect today. - Your product can be set up in days or less; if implementation always takes months and complex integrations, start with a different playbook. - You have access to basic product usage data (even if it is messy) that can be used to approximate time-to-value. ## System approach - Treat time-to-value as a system property: the combination of signup flow, onboarding journeys, defaults, sample data, and guardrails that determine how quickly users hit activation. - Start from a single, explicit activation definition and work backwards to design the minimum set of steps required to reach it for your primary persona. - Instrument that path end-to-end, then iteratively remove, reorder, or automate steps to collapse delays between signup and activation. - Align success, sales, and product on a shared dashboard so everyone reacts to the same TTV and activation metrics instead of local anecdotes. ## Execution steps - Pick one primary journey (for example, “new workspace admin creates first project and connects data”) and write down a crisp activation event for it. - Use existing data to measure current time-to-value: distribution from signup to activation, broken down by segment (plan, role, channel). - Map the actual onboarding path users take today, from signup screens through in-product prompts, docs, and emails; list every mandatory action. - Mark each step as essential, optional, or vanity relative to the activation event; remove or defer as many optional and vanity steps as possible. 
- Introduce opinionated defaults and sample data so new users can skip configuration work and still reach a meaningful outcome. - Add a visible, outcome-oriented checklist or guide that reflects only the steps required to hit activation for this journey. - Wire analytics events for each checklist step and the activation event; confirm that you can see per-step drop-off and time between steps. - Ship the streamlined journey to a subset of traffic; compare activation rate and time-to-value cohorts against the previous baseline. - Iterate based on what you see: if users stall on a step, either simplify it, add contextual help, or move it after activation. - Once the first journey is reliably faster, document the pattern and apply it to adjacent personas or segments one at a time. ## Metrics to watch - Median time-to-activation for the target journey — Trend down by 20–50% vs current baseline over 4–8 weeks. (Measure in hours or days from signup to first completion of the defined activation event, segmented by acquisition channel and role.) - Signup-to-activation rate for the target journey — Trend up; avoid improvements in TTV that coincide with a lower activation rate. (Track by cohort; ensure that faster journeys do not skip critical steps that matter for retention or revenue.) - Drop-off rate at each onboarding step — Identify and steadily reduce steps with the highest abandonment. (Use this to prioritize UX and copy changes; even small improvements at early steps can compound into large gains in activation.) - D7 and D30 retention for activated users — Stay flat or improve as TTV falls. (If retention degrades while TTV improves, you may be over-optimizing for speed at the expense of meaningful value.) ## Failure modes - Optimizing for a vanity activation definition that does not correlate with retention or revenue, leading to “fast” TTV that does not move the business. - Designing for the median user and ignoring critical segments where setup is structurally harder (for example, enterprise security constraints). - Treating instrumentation as optional and relying on qualitative anecdotes instead of hard data for before/after comparisons. - Trying to collapse every journey at once instead of focusing on one high-impact path where you can prove the model. --- # Improve user activation rates Source: https://www.skene.ai/resources/playbooks/improve-user-activation Job to be done: Increase the share of new users and accounts that reach a meaningful activation milestone within a defined window. Tags: Activation, Onboarding, Metrics Last updated: 2026-01-05 Define a precise activation event, redesign onboarding around it, and instrument progress so you can systematically lift activation. ## Problem context - You have steady signup or install volume, but a small fraction of users ever become active or stick around. - Internal metrics talk about “actives” or “engaged users”, but there is no single activation definition that everyone uses. - Onboarding flows have grown organically over time and now mix demos, configuration, and product education without a clear destination. ## What breaks - Sales and success teams chase leads that have never reached value, resulting in long cycles with low close rates. - Product teams struggle to interpret experiments because they do not know whether changes helped more users reach a canonical milestone. - Dashboards look healthy at the top of the funnel, but cohort retention and expansion revenue lag expectations. 
## When this applies - You already have some instrumentation and can see at least basic product events, even if they are noisy. - Your product has repeatable use cases where a clear activation milestone exists (for example, shipping a change, sending a message, or connecting data). - You are willing to trade breadth of onboarding for depth of one or two high-confidence journeys. ## System approach - Anchor your PLG motion around a single activation definition per primary persona and treat that definition as a product artifact, not just an analytics query. - Design onboarding and early surface areas as a guided path toward activation, not as a tour of features or settings. - Use cohort-based activation metrics to evaluate changes; avoid judging success from raw signup or traffic numbers alone. ## Execution steps - Run a data-backed exercise to propose candidate activation events by analyzing what your best, most retained customers did in their first days or weeks. - Stress-test each candidate with product, sales, and success: is it observable, does it happen early enough, and does it strongly correlate with long-term value? - Select one activation definition per primary persona and document it in writing, including examples and non-examples. - Map the current path from signup to activation; mark where users first encounter the capability needed to complete the activation event. - Redesign onboarding so that reaching activation is the primary goal: remove distractions, defer advanced configuration, and route users into a single, opinionated path. - Add instrumentation for the activation event at both user and account levels, plus for key steps leading up to it. - Introduce a visible progress indicator (for example, a role-specific checklist or journey) that makes activation feel like a clear, achievable outcome. - Ship the new flow to a test cohort; compare activation rate and time-to-activation against historical cohorts over multiple weeks. - Iterate on the highest-drop-off steps using a mix of UX changes, better defaults, and contextual education tied to the job-to-be-done. ## Metrics to watch - User-level activation rate within X days of signup — Trend up consistently across cohorts. (Choose a window (for example, 7 or 14 days) that matches your product’s natural evaluation cycle; track by channel and persona.) - Account-level activation rate for target segments — Increase share of accounts with at least one activated user. (Particularly important for multi-seat or team products where one champion can activate the account on behalf of others.) - Time-to-activation (median and p75) — Trend down or stay flat as activation rate improves. (Watch for cases where activation increases but TTV stretches out, which can hurt conversion from free to paid.) - Post-activation D30 retention — Improve or remain stable as you change onboarding. (If short-term activation gains come with worse retention, revisit your activation definition or the quality of the experience after activation.) ## Failure modes - Picking an activation definition that is too late (for example, full team rollout) and therefore rarely happens during trials or early usage. - Overfitting onboarding to a narrow persona and unintentionally degrading the experience for other valuable segments. - Treating activation as a one-time project instead of a metric you revisit quarterly as the product and customer base evolve. 
- Focusing on surface-level UX tweaks without addressing deeper issues like product-market fit or missing capabilities required to reach value. --- # Onboarding without mature event instrumentation Source: https://www.skene.ai/resources/playbooks/onboarding-without-events Job to be done: Design and improve onboarding journeys when product analytics are missing, incomplete, or unreliable. Tags: Onboarding, Instrumentation, Foundations Last updated: 2026-01-05 Use proxy signals, qualitative insight, and a staged instrumentation roadmap to move from guesses to measurable onboarding. ## Problem context - You want to improve onboarding and activation but do not trust your event data, or do not have any product analytics set up. - Teams rely on support tickets, anecdotal feedback, or gut feel to judge whether onboarding changes are working. - Engineering is wary of “yet another tracking project”, and no one owns the instrumentation layer end-to-end. ## What breaks - Onboarding experiments become hard to interpret; teams ship changes and move on without clear evidence of impact. - Leadership loses confidence in product metrics because numbers conflict across tools or cannot be reproduced. - PLG initiatives stall since core metrics like activation, time-to-value, and retention cannot be measured reliably. ## When this applies - You have little or no structured product analytics today, or existing events are inconsistent and undocumented. - You can access at least some operational data (for example, signups, workspaces created, billing events) from your application database or backend logs. - The team is willing to invest in a staged instrumentation effort in parallel with onboarding improvements. ## System approach - Separate the problem into two tracks: redesigning onboarding flows using qualitative insight and low-cost proxies, and building a minimum viable instrumentation backbone in parallel. - Start by defining a small set of events tied to real jobs-to-be-done and activation, not an exhaustive catalog of every click. - Use simple, durable mechanisms (for example, backend events or a small SDK) that are easy to maintain as the product evolves. ## Execution steps - Document your current onboarding flows in plain language: entry points, key screens, and the outcomes you want new users to reach. - Interview a small sample of recent signups (both successful and churned) to understand where they got stuck or confused during onboarding. - Define a first-pass activation milestone and TTV goal, even if you cannot measure them precisely yet. - Identify 3–5 proxy signals you can measure today without a full analytics setup (for example, projects created, workspaces with any activity, first invoice paid). - Create a minimum event schema focused on onboarding and activation: signup, first key action, activation event, and a handful of high-leverage steps. - Work with engineering to implement these events in the most stable layer you can (often the backend or a central events module). - Stand up a simple reporting path—this can be a warehouse table, a basic dashboard, or even CSV exports—that lets you see funnel and cohort views for these events. - Redesign one onboarding flow using qualitative findings: remove obvious blockers, clarify copy, and guide users toward the proxy signals you can observe. - Compare cohorts before and after the new flow using your proxy metrics while you continue improving event coverage. 
- Iteratively expand the event schema and dashboards so you can graduate from proxies to direct measurement of activation and time-to-value. ## Metrics to watch - Coverage of core onboarding events — Reach and maintain >95% of relevant flows emitting events. (Track how often expected events are missing in logs for known onboarding paths; missing events are instrumentation bugs.) - Proxy activation rate based on available signals — Trend up as you improve flows, with definitions documented and versioned. (Examples include “workspaces with at least one project created” or “accounts with any active usage in first 7 days”.) - Median time from signup to first proxy success event — Trend down as onboarding simplifies. (This gives an approximate view of time-to-value before you have full activation instrumentation.) - Share of code paths covered by the new events module — Increase over time until core journeys are consistently instrumented. (Measure by scanning routes or services that handle onboarding-related operations.) ## Failure modes - Trying to recreate a full, detailed analytics taxonomy from scratch instead of focusing on a small, high-impact event set. - Letting every team add their own events and naming conventions ad hoc, leading to future data debt. - Treating instrumentation as a one-off project rather than part of the regular development process and code review. - Waiting for “perfect data” before making any improvements to onboarding flows users are clearly struggling with today. --- # Design self-serve retention loops Source: https://www.skene.ai/resources/playbooks/self-serve-retention Job to be done: Keep activated users coming back and growing usage without heavy human intervention. Tags: Retention, Loops, Expansion Last updated: 2026-01-05 Move beyond one-time activation by designing product loops that create habitual, compounding usage in self-serve accounts. ## Problem context - You see reasonable activation, but many users and accounts go dormant after a few days or weeks. - Retention charts show steep drop-offs after the first few cohorts, and sales teams mostly work with a small subset of high-touch accounts. - Product discussions focus heavily on new features rather than strengthening the loops that keep existing users engaged. ## What breaks - Top-of-funnel efforts become expensive because you constantly need new signups to replace churned users. - Self-serve revenue underperforms; most growth comes from a small number of sales-driven deals. - It becomes difficult to justify PLG investments because retention metrics do not reflect the value of acquired users. ## When this applies - You already have a clear activation definition and a non-trivial number of users reach it. - Your product has natural repeatable workflows (for example, reviewing analytics, shipping changes, or managing projects) that could support habits. - You can see at least basic retention or usage metrics over time, even if they are coarse. ## System approach - Shift the mental model from funnels to loops: focus on the actions that should repeat weekly or monthly and what makes each repetition more valuable. - Identify one or two core loops—such as collaboration, data, or habit loops—and design triggers, rewards, and investments around them. - Align product, success, and marketing so that campaigns, in-product prompts, and lifecycle messaging reinforce the same loops. 
## Execution steps - Define what “healthy ongoing usage” means for your product in behavioral terms (for example, “runs at least one workflow per week” or “views key dashboard monthly”). - Plot retention cohorts for activated users and identify where curves flatten; note differences by segment or use case. - Map existing product behaviors that could form loops: recurring tasks, notifications, collaboration, or accumulating configuration/data. - Choose one core loop to strengthen first (for example, “weekly review of analytics dashboard”) and write down the trigger → action → reward → investment pattern. - Audit your product to see where that loop currently breaks: weak or missing triggers, unclear rewards, or no lasting investment. - Design concrete product changes to reinforce the loop: better scheduling, saved views, collaborative comments, or progress indicators tied to real work. - Add instrumentation to measure loop participation (for example, “dashboards viewed per active account per week”) and connect it to retention and expansion metrics. - Ship changes to a subset of users or a specific segment; monitor retention curves and loop metrics over multiple periods. - Once the first loop shows improvement, document the pattern and apply it to adjacent loops or segments. ## Metrics to watch - D7, D30, and D90 retention for activated users — Trend up over successive cohorts. (Track by persona and plan; small changes in later-period retention can have large revenue impacts.) - Frequency of core loop actions per active account — Increase and stabilize over time. (Examples: dashboards viewed per week, workflows run per month, or collaborative sessions per account.) - Expansion revenue or seat growth from self-serve accounts — Trend up as loops deepen. (Measure upgrades, additional seats, or usage-based overages driven primarily by product usage rather than human outreach.) - Cohort-based net revenue retention (NRR) for self-serve segments — Move toward or above 100% for target cohorts. (Even small improvements in NRR for self-serve tiers can materially change long-term unit economics.) ## Failure modes - Treating retention purely as a notification or email problem instead of addressing whether the product actually fits into repeatable workflows. - Trying to design many loops at once, which dilutes focus and makes it hard to see which changes affected retention. - Ignoring the qualitative reasons users stop coming back (for example, lack of trust, unclear value, or competing tools). - Assuming loops that work for high-touch, enterprise accounts will automatically transfer to self-serve segments. --- # Run PLG with a small product and success team Source: https://www.skene.ai/resources/playbooks/plg-with-small-teams Job to be done: Design a sustainable PLG motion when you have limited engineering, data, and customer success capacity. Tags: Strategy, Resourcing, Foundations Last updated: 2026-01-05 Prioritize a narrow set of journeys, metrics, and automations so a small team can operate PLG without burning out. ## Problem context - You want the benefits of PLG but only have a handful of engineers and no dedicated analytics or growth team. - Existing guidance assumes separate teams for product, growth, data, and success that you simply do not have yet. - The team is already stretched thin shipping core product features, and PLG work feels like “extra” on top. ## What breaks - Attempts to copy complex PLG stacks from larger companies lead to half-finished experiments and brittle infrastructure. 
- Engineers burn out maintaining bespoke onboarding flows, checklists, and analytics wiring that quickly go stale. - Leadership loses confidence in PLG because early efforts create overhead without clear, measurable results. ## When this applies - You are an early-stage team or a small product line inside a larger company with limited dedicated growth resources. - You have some self-serve or trial motion, even if it is basic, and want to improve it without hiring a large team first. - You are comfortable making explicit tradeoffs about which segments and journeys you will not optimize yet. ## System approach - Treat PLG as an operating system that grows in layers: start with a minimal viable set of journeys, metrics, and automations that your current team can maintain. - Aggressively reuse infrastructure and patterns instead of building bespoke flows for every use case. - Align roadmap, onboarding, and instrumentation work so each release moves both the product and the PLG system forward. ## Execution steps - Choose a single primary segment and job-to-be-done that PLG will serve in the next 3–6 months; write down what success looks like in plain language. - Define one activation milestone and a small set of supporting metrics (for example, activation rate, time-to-activation, and D30 retention) for that segment. - Audit your current onboarding and product surfaces; identify bespoke flows, tours, or dashboards that could be replaced with simpler, generic patterns. - Create a minimal PLG roadmap that pairs each quarter’s product work with one PLG improvement (for example, “new feature + updated onboarding journey + metrics”). - Standardize how you instrument events and journeys so adding new flows is a repeatable, low-friction task instead of a custom project. - Automate the highest-leverage success and sales motions first (for example, lifecycle emails triggered by key events, in-product prompts for upgrades) and explicitly postpone low-impact manual playbooks. - Establish a lightweight review cadence (for example, monthly) where you look at the same PLG dashboard and decide on one or two changes, not ten. - As the motion starts to work and the team grows, layer in additional segments or jobs-to-be-done using the same patterns. ## Metrics to watch - Activation rate and time-to-activation for the chosen segment — Trend in the right direction with each quarterly iteration. (These are your primary success criteria; avoid adding many more until these are healthy.) - D30 retention for activated users in the target segment — Trend up or remain stable as you experiment. (Signals whether the minimal PLG system is creating durable value or just short-term engagement.) - Engineering hours spent on PLG infrastructure vs feature work — Stay within an explicit budget (for example, 10–20% of capacity). (Track roughly so PLG does not silently consume the entire roadmap.) - Number of PLG patterns reused across features — Increase reuse over time. (Examples include standardized journeys, checklists, or upgrade prompts that can be applied without new plumbing.) ## Failure modes - Trying to operate a complex experimentation program (many variants, short cycles) without the data or traffic to support it. - Spreading limited engineering capacity across too many PLG surfaces—multiple personas, segments, and plans—so none of them are great. - Letting bespoke onboarding and analytics code proliferate instead of consolidating on shared components and patterns. 
- Deferring simple automation (for example, lifecycle messages or in-product prompts) because you are chasing advanced personalization you cannot yet support. --- # Route accounts to human touch using product signals Source: https://www.skene.ai/resources/playbooks/route-accounts-to-human-touch Job to be done: Decide when automation is no longer enough and human intervention actually helps. Tags: Routing, Activation, Expansion Last updated: 2026-01-05 Create routing rules that use product signals to decide when sales or success should step in, instead of treating all engaged accounts the same. ## Problem context - In many PLG motions every engaged account looks the same: the product sends the same prompts and campaigns regardless of real potential or risk. - Sales and success teams either chase every active account or none of them, because there is no shared, product-based definition of when humans add leverage. - This leads to noisy outreach, missed expansion opportunities, and accounts that stall because no one intervenes when the product alone is not enough. ## What breaks - High intention accounts never get timely human help, so they churn or buy a competitor after a promising start. - Sales and success burn time on low value accounts that only ever needed self-serve support, reducing capacity for the right accounts. - The organization loses trust in PLG signals because there is no clear connection between product behavior and revenue outcomes. ## When this applies - You have both self-serve and human motions, even if the human motion is small, and you want them to complement rather than fight each other. - You capture at least a minimal set of product usage signals such as activation events, plan limits, or collaboration behaviors. - Sales or success regularly complain that they hear about promising accounts too late or not at all. ## System approach - Treat routing as a system: clear product signals, explicit thresholds, and playbooks for what happens when those thresholds are crossed. - Start from concrete examples of successful expansions and painful misses, and reverse engineer which product behaviors should have triggered human touch. - Define a small set of product-qualified lead and product-qualified expansion conditions, and wire them into your CRM and workflows. ## Execution steps - List your existing product signals: activation events, feature usage, plan limits, invitations, billing events, and support interactions. - Analyze a sample of accounts that expanded successfully and those that churned after early engagement; identify behaviors that reliably precede each outcome. - Propose a first version of routing rules, such as “PQL when activation + thresholded usage + collaboration” or “risk alert when usage collapses after activation”. - Align product, sales, and success on these definitions; write them down in plain language and make them visible in your CRM or documentation. - Implement the routing logic in your data layer or event processor so product signals automatically tag or score accounts in your CRM. - Design specific human plays for each route, such as consultative onboarding for high potential accounts or light touch nudges for risk accounts. - Start with conservative thresholds and a narrow segment so you do not overwhelm humans; monitor volume and outcomes for each routed path. - Iterate monthly on the rules and thresholds based on conversion, win rates, and capacity feedback from the teams running the plays. 
## Metrics to watch - Conversion rate from routed PQLs to expansion or paid plans — Trend up as routing rules and plays improve. (Measure separately for accounts that received human touch versus those that stayed fully self-serve.) - Time from qualifying product signal to first human touch — Trend down toward a target window (for example, 24–72 hours). (Long delays after a strong product signal erode intent and can make outreach feel random.) - Share of human capacity spent on high potential accounts — Increase the proportion of time spent on routed PQLs versus unqualified accounts. (Track as a rough split in your CRM or task system so you can see if routing is focusing the team.) - Churn or shrinkage among accounts that met strong-product-signal thresholds but never received human touch — Trend down as routing improves. (This is the “silent miss” rate; it should fall over time as routing and plays mature.) ## Failure modes - Defining routing rules that are too broad, overwhelming sales and success with noisy PQLs that do not actually convert. - Relying on vanity signals such as simple login counts instead of behaviors that correlate with durable value and expansion. - Designing routing rules in a spreadsheet without involving the humans who will run the follow up plays. - Letting routing decay as the product changes, so new features and motions never emit the right signals. --- # Detect false activation and shallow success Source: https://www.skene.ai/resources/playbooks/detect-false-activation Job to be done: Identify users who hit activation events but never reach durable value. Tags: Metrics, Failure modes, Activation Last updated: 2026-01-05 Stress-test your activation definition and metrics so you can distinguish real customer success from shallow, one-off wins. ## Problem context - Activation dashboards often look healthy, but retention and expansion tell a different story. - Teams celebrate hitting activation targets even when many “activated” users never build habits or contribute to revenue. - Without a way to detect false activation, you can scale a motion that optimizes only the optics of success. ## What breaks - Product and growth teams over-invest in flows that produce shallow success instead of durable outcomes. - Leadership loses confidence in PLG metrics because they do not match cohort and revenue reality. - Downstream teams such as sales and success are handed “qualified” accounts that are not truly ready, wasting time and trust. ## When this applies - You already track an activation event and report an activation rate, but cohorts still show steep drop-offs. - You see many accounts hit activation during trials or pilots but very few convert or expand. - You suspect that your activation event is either too easy or not aligned to real value. ## System approach - Treat activation as a hypothesis to be tested against retention, expansion, and qualitative evidence, not as a fixed truth. - Extend your metric model to include second-order checks: behavior after activation, depth of usage, and revenue or renewal outcomes. - Iterate on activation definitions with data and narrative until false positives are rare and clearly understood. ## Execution steps - Define what “durable value” means for your product in behavioral terms, such as consistent weekly usage, key workflows completed, or expansion triggers. - Segment historical users into cohorts based on whether they hit your current activation event and whether they later showed durable value. 
- Quantify how many activated users fail to reach durable value, and identify common patterns among them (for example, single session success, one project only, no collaboration). - Interview a small sample of shallow-success users to understand what they were trying to do and why they did not continue. - Propose refinements to your activation event such as adding conditions for depth, repetition, or collaboration. - Run side-by-side tracking of the old and new activation definitions for at least one or two cohorts; compare retention and expansion for each definition. - Update dashboards, documentation, and downstream routing logic to use the refined activation definition once you are confident it better reflects durable value. ## Metrics to watch - Share of activated users who meet a durable value threshold (for example D30 activity) — Trend up as activation definitions improve. (Track over cohorts to see whether activation is becoming a stronger predictor of long term success.) - Activation-to-D30 retention rate — Increase or remain stable as you refine activation. (If the rate declines, you may be making activation too easy or misaligned with real value.) - Activation-to-expansion conversion (account level) — Trend up over time. (Measures whether accounts that activate are actually the ones that later expand seats, usage, or plans.) - Proportion of accounts flagged as “shallow success” — Trend down after refinements. (Define shallow success explicitly, for example “activated but no meaningful activity after 14 days”.) ## Failure modes - Treating activation as a purely volume-driven metric and ignoring what happens after users hit the event. - Making activation definitions so strict that almost nobody qualifies, which breaks your ability to run experiments. - Changing activation definitions frequently without backfilling or documenting, making it impossible to compare cohorts over time. - Ignoring qualitative feedback from users who churn quickly after activation because the numbers look “good enough”. --- # Recover stalled but high-potential accounts Source: https://www.skene.ai/resources/playbooks/recover-stalled-high-potential-accounts Job to be done: Re-engage accounts that showed strong early signals but plateaued before expansion or deeper adoption. Tags: Retention, Routing Last updated: 2026-01-05 Use product signals and targeted plays to bring back accounts that looked promising but quietly stalled before they grew. ## Problem context - Many of your most promising accounts do not churn loudly; they simply stop progressing after a strong start. - Dashboards show good early activation and usage, but expansion and long term retention lag because high potential accounts flatten out. - Sales and success teams have limited bandwidth and often focus on net new logos instead of rescuing stalled motion in existing accounts. ## What breaks - Expansion revenue underperforms because high potential accounts never reach the depth of adoption needed to justify upgrades. - Self-serve cohorts look fine in early weeks but decay sharply as stalled accounts quietly disengage. - You lose valuable learning opportunities because no one systematically investigates where high intent accounts get stuck. ## When this applies - You can identify accounts that had strong early signals such as activation, collaboration, or high feature usage. - Your team has at least some capacity for targeted outreach or in product re engagement flows. 
- You see a non trivial share of high intent accounts that never convert, expand, or renew. ## System approach - Treat stalled account recovery as a structured program, not ad hoc hero work: define what “stalled but high potential” means and design dedicated plays. - Use product and billing signals to detect plateaus early, then route accounts into re engagement paths that match their context. - Integrate learnings from recovery work back into onboarding, activation, and retention loops so the same stalls happen less often. ## Execution steps - Define criteria for “high potential” based on early behavior, such as activation, breadth of feature usage, or collaboration patterns. - Define what “stalled” means in your context, for example a sharp drop in activity, no usage in a set window, or no progress toward expansion triggers. - Build a simple query or dashboard that lists accounts that are both high potential and stalled, and review it regularly with product, sales, and success. - Design 2–3 recovery plays, such as a guided check in from success, a tailored email sequence, or in product prompts offering help with the next step. - Instrument recovery plays so you can see which ones lead to re activation, deeper usage, or expansion versus no change. - Allocate explicit capacity to recovery, even if small; treat it as a recurring slot in success or growth planning. - Review learnings quarterly and feed them back into core journeys, reducing the number of accounts that stall in the same way. ## Metrics to watch - Reactivation rate among stalled high potential accounts — Trend up as recovery plays improve. (Define reactivation clearly, such as returning to a healthy usage pattern or hitting a new milestone.) - Post recovery D30 or D60 retention — Remain strong or improve relative to non stalled cohorts. (Shows whether recovered accounts are truly back on track or just briefly nudged.) - Expansion rate and revenue from recovered accounts — Trend up; should justify human or program investment. (Track separately from net new expansions to understand the specific impact of recovery work.) - Volume of stalled high potential accounts over time — Trend down as upstream journeys improve. (A shrinking pool indicates that onboarding, activation, and retention loops are doing a better job.) ## Failure modes - Treating stalled account recovery as a one time “save” campaign instead of a continuous program. - Over automating recovery with generic emails that do not acknowledge the account’s specific context. - Failing to adjust product or journeys based on what you learn from repeated stalls. - Routing every stalled account to humans without prioritization, overwhelming the team. --- # Design expansion without introducing sales friction Source: https://www.skene.ai/resources/playbooks/expansion-without-sales-friction Job to be done: Grow account value while preserving self-serve momentum. Tags: Expansion, Strategy Last updated: 2026-01-05 Add expansion mechanics that feel natural inside the product so accounts can grow usage and spend without heavy, blocking sales touchpoints. ## Problem context - Many PLG products do a good job of getting users activated and retained, but expansion is left to generic upsell banners or late stage sales conversations. - Heavy handed sales processes introduced too early can break the self-serve experience that made the product attractive in the first place. 
- As average contract values grow, teams feel pressure to add more approval steps and negotiation, which can slow or stop organic expansion. ## What breaks - Accounts that are ready to grow cannot do so easily on their own and either delay upgrades or move to competitors with smoother paths. - Sales and finance teams become bottlenecks for straightforward expansions that could have been automated. - Product and growth teams lose visibility into where users are hitting the ceiling of current plans because expansion is handled entirely in backchannel deals. ## When this applies - Your product has clear ways accounts can expand usage, such as more seats, higher volumes, or additional features. - You already see signs of expansion demand, for example repeated plan limit hits, frequent add on requests, or manual upgrade tickets. - You want to protect the self serve feel of the product while still making room for sales assisted or enterprise expansions where appropriate. ## System approach - Design expansion as part of the product experience: clear limits, upgrade paths, and in product affordances that are tied to real moments of value. - Reserve heavier human processes for complex or high risk changes, and let straightforward expansions stay self serve. - Instrument expansion journeys so you can see where intent appears, where friction accumulates, and where human help is genuinely needed. ## Execution steps - Map all the ways an account can expand today, including seats, usage, features, and plans; document which paths are self serve, which require humans, and why. - Analyze where users currently encounter limits or friction, such as hitting a quota, attempting to add teammates, or accessing premium features. - Design in product upgrade paths that appear at natural moments of value, for example when a user hits a limit or successfully completes a workflow. - Create clear, progressive upgrade options that match how accounts grow, avoiding one size fits all bundles that force premature conversations. - Define rules for when a human should be looped in, such as large expansions, risky changes, or signals that the account is a strong enterprise prospect. - Instrument every expansion touchpoint, including view of upgrade options, attempts, completions, and drop offs. - Run experiments on copy, pricing presentation, and timing to reduce friction on self serve paths while monitoring for negative effects on trust or adoption. - Review expansion performance regularly with sales, success, and finance to align on where to keep or remove manual controls. ## Metrics to watch - Self-serve upgrade rate from free or lower tiers — Trend up over time. (Track by segment so you can see where in product expansion mechanics work best.) - Time from first expansion signal (for example, limit hit) to completed upgrade — Trend down as paths smooth out. (Long delays indicate friction in pricing clarity, approvals, or UI flows.) - Net revenue retention (NRR) for self-serve segments — Trend toward or above 100%. (Healthy NRR indicates that expansion is offsetting or exceeding churn in the relevant segments.) - Share of expansions requiring manual intervention — Decrease for straightforward, low risk expansions. (Reserve manual steps for high value or complex changes where human judgment adds real value.) ## Failure modes - Hiding upgrade options behind opaque or slow sales processes even for simple expansions. - Over indexing on short term expansion revenue by pushing upgrades at the wrong moment, which can damage trust. 
- Designing pricing and packaging that does not align with how customers naturally grow usage. - Failing to give sales and success visibility into self serve expansion signals, leading to uncoordinated outreach. --- # Instrument leading indicators, not lagging metrics Source: https://www.skene.ai/resources/playbooks/instrument-leading-indicators Job to be done: Choose signals that predict success before churn or stagnation appears. Tags: Instrumentation, Metrics Last updated: 2026-01-05 Shift your instrumentation from purely lagging outcomes to a small set of predictive product signals that give you early warning and opportunity. ## Problem context - Many PLG dashboards are dominated by lagging outcomes such as revenue, churn, or late stage retention. - By the time a cohort looks bad in those metrics, the users who made it that way are long gone. - Teams instrument what is easiest to count instead of the behaviors that would have predicted success or failure earlier. ## What breaks - You react slowly to emerging problems because you only notice them once they show up in lagging financial or retention numbers. - Experiments are hard to interpret because you do not have stable, intermediate signals that move before long term metrics do. - It becomes difficult to prioritize work because every decision depends on waiting for late stage outcomes. ## When this applies - You already have basic event tracking but most of your regular reporting focuses on revenue and high level retention. - You suspect there are key behaviors that predict success or failure but they are not part of your core metric stack. - You want to make faster, more confident decisions about onboarding, pricing, or feature work. ## System approach - Treat leading indicators as hypotheses: candidate signals you believe will predict outcomes, to be tested with data. - Start from your best customers and your worst cohorts, and look backwards to see what behaviors reliably differ. - Elevate a small, stable set of leading indicators into your primary dashboards and review rituals. ## Execution steps - Clarify the lagging outcomes that matter most for your business, such as activation to paid conversion, D90 retention, or expansion revenue. - For a sample of successful and unsuccessful accounts, analyze early product behavior in the first days or weeks to identify patterns that differ. - Propose a shortlist of candidate leading indicators, such as completion of a specific workflow, collaboration events, or depth of usage in a key feature. - Backtest each candidate: measure how strongly it correlates with your lagging outcomes across multiple cohorts. - Select a very small set of leading indicators to formalize, and document exactly how they are defined and calculated. - Audit and, if needed, improve the instrumentation and data pipelines that feed these indicators so that they are reliable and timely. - Integrate leading indicators into your core dashboards and weekly or monthly reviews, and use them to trigger early interventions or experiments. ## Metrics to watch - Correlation between leading indicators and target outcomes — Remain strong or improve as definitions are refined. (Track at least roughly, for example using simple cohort splits or correlation coefficients.) - Signal coverage across new accounts — Increase share of accounts for which leading indicators can be computed. (Low coverage can hide emerging issues or make signals appear better than they are.) 
- Time between leading indicator movement and lagging outcome changes — Provide enough lead time to act (for example weeks rather than days). (If indicators move almost simultaneously with outcomes, they may not be useful for intervention.) - Number of decisions or experiments explicitly keyed to leading indicators — Increase as the system matures. (Shows whether indicators are actually used or just reported.) ## Failure modes - Choosing too many indicators, making it hard to know which ones to trust or focus on. - Relying on indicators that are easy to measure but weakly predictive, such as raw login counts. - Frequently changing definitions without backfilling history, making trend analysis impossible. - Treating leading indicators as immutable truths instead of hypotheses to be revisited as the product evolves. --- # Handle regression after product changes Source: https://www.skene.ai/resources/playbooks/handle-regressions-after-product-changes Job to be done: Detect and mitigate drops in usage caused by releases, pricing changes, or UX shifts before they become permanent. Tags: Failure modes, Retention, Operations Last updated: 2026-01-05 Add guardrails around releases and pricing changes so you can see and respond quickly when PLG metrics regress. ## Problem context - Seemingly small product or pricing changes can quietly damage activation, retention, or expansion, especially in PLG systems. - Without explicit guardrails, regressions often show up weeks later in cohorts and revenue, long after the change shipped. - Teams ship improvements and refactors that are hard to roll back and do not have a clear view of whether they helped or hurt users. ## What breaks - You lose trust in experimentation because changes feel risky and outcomes are hard to interpret. - Regression firefighting consumes capacity that could have been spent on deliberate progress. - Users experience sudden friction or confusion that destroys the habits or trust you previously built. ## When this applies - You ship product, UX, or pricing changes frequently enough that regressions are a real risk. - You have at least basic PLG metrics in place but do not consistently check them before and after changes. - Incidents of “something feels off after we shipped X” are common in retrospectives. ## System approach - Treat material product and pricing changes as experiments with explicit hypotheses and guardrail metrics, not just releases. - Use cohort and feature level comparisons to distinguish normal noise from meaningful regressions. - Design predefined mitigation paths such as rollbacks, follow up fixes, or messaging when guardrails are tripped. ## Execution steps - Define which types of changes require regression guardrails, such as onboarding flows, pricing or packaging updates, navigation, or key workflows. - For each qualifying change, write a short hypothesis including which metrics should move and which must not degrade (for example activation, time-to-value, retention). - Set up pre-change baselines for these metrics on the relevant segments and cohorts. - After shipping, monitor early cohorts and feature usage in a dedicated view for at least one or two release cycles. - If guardrail metrics move in the wrong direction beyond an agreed threshold, trigger a defined incident path: investigation, fix, rollback, or targeted communication. - Capture learnings from each regression in a lightweight playbook so future changes in similar areas inherit better guardrails. 
## Metrics to watch - Activation rate and time-to-activation before vs after key changes — Remain stable or improve within expected variance. (Segment by change exposure if you are rolling out gradually.) - Retention curves (for example D7, D30) for cohorts exposed to the change — Avoid systematic downward shifts versus prior cohorts. (Look for pattern breaks rather than single point anomalies.) - Usage of affected features or flows — Move in the expected direction without unexpected drop offs. (Sharp declines immediately after a change can indicate usability or value issues.) - Support tickets or negative feedback related to the changed area — Stay within normal bounds. (A spike often correlates with regressions that metrics alone may not fully explain.) ## Failure modes - Treating major changes as routine deployments with no explicit measurement plan. - Overreacting to normal metric noise and rolling back changes that are actually neutral or positive. - Ignoring guardrail breaches because rollbacks are politically difficult or technically expensive. - Focusing only on short term metrics and missing slower moving regressions in retention or revenue. --- # Avoid over-automation in early PLG Source: https://www.skene.ai/resources/playbooks/avoid-over-automation-early-plg Job to be done: Decide what not to automate when PLG maturity is low. Tags: Strategy, Failure modes Last updated: 2026-01-05 Protect learning and trust by keeping some flows manual until you understand user behavior and edge cases well enough to automate safely. ## Problem context - Early PLG teams feel pressure to automate onboarding, routing, and messaging before they fully understand user behavior. - Over-automation can lock in bad assumptions and hide important signals about what users really need. - Once flows are automated, it becomes harder to notice and correct subtle issues that humans would have caught. ## What breaks - Users experience generic, misaligned flows that erode trust instead of feeling guided. - Teams stop hearing about important edge cases because automation shields them from direct contact with users. - Instrumented data reflects the automated path rather than the actual jobs-to-be-done, leading to misleading conclusions. ## When this applies - You are in the early stages of PLG and still learning which journeys, segments, and activation definitions are correct. - Headcount is limited and automation is tempting as a way to “scale” prematurely. - You have not yet run a sufficient number of manual or semi-manual experiments to understand the landscape. ## System approach - Treat manual work as an investment in learning that should precede automation, not as a permanent operating model. - Prioritize automation where behavior and value are well understood, and keep high uncertainty areas manual or semi-manual. - Design automation so that it can be adjusted or rolled back without heroic effort. ## Execution steps - List the core PLG flows you are considering automating, such as onboarding, lifecycle messaging, routing to sales, or upgrade prompts. - For each flow, assess your level of understanding: how confident are you in the underlying jobs-to-be-done, segments, and success criteria. - Start with manual or semi-manual experiments for high uncertainty flows, using humans to make decisions and gather qualitative feedback. - Automate only the stable parts of flows where you see consistent patterns, and keep exceptions or complex cases routed to humans. 
- Document assumptions you are baking into automation and set explicit review dates to revisit them with new data. - Instrument automated flows carefully so you can see when they behave differently than manual baselines. ## Metrics to watch - Conversion and retention of users going through automated vs manual flows — Automated should match or beat manual before scaling further. (If automated flows underperform, pause rollout and revisit assumptions.) - Volume and quality of qualitative feedback from early users — Remain sufficient to learn from even as automation increases. (A sudden drop in rich feedback can signal that you automated too much.) - Incidents or regressions attributable to automated decisions — Trend down over time. (Track notable failures where automation did the wrong thing so you can improve safeguards.) - Time spent on manual work in clearly stable flows — Trend down as you confidently automate those areas. (Manual work is a learning tool, but unnecessary manual repetition is a sign to consider automation.) ## Failure modes - Automating flows you have never run manually, so you do not know what “good” looks like. - Treating automation decisions as permanent, making it politically or technically hard to reverse them. - Using automation to avoid talking to users, rather than to scale what you already know works. - Leaving instrumentation as an afterthought, so you cannot tell whether automated flows are helping or hurting. --- # Redesign activation when value is delayed Source: https://www.skene.ai/resources/playbooks/delayed-value-activation Job to be done: Create momentum when real value only appears days or weeks later. Tags: Activation, Time-to-value Last updated: 2026-01-05 Introduce intermediate milestones, proxy value, and scaffolding so users stay engaged while waiting for full outcomes. ## Problem context - Many B2B and integration heavy products require days or weeks before users see the full impact of their work. - Traditional activation advice assumes that users can experience value in minutes, which does not match your reality. - Without thoughtful design, users stall or churn during long setup windows, even if the eventual value is high. ## What breaks - Activation metrics look poor because users drop out before they reach the long delayed milestone. - Sales and success struggle to keep champions engaged while they wait for proof points. - Significant engineering and onboarding investment yields little return because the last mile to value is too far away. ## When this applies - Your product depends on data pipelines, integrations, hardware, or organizational changes that take time. - Users cannot realistically see full value within the first session or even the first few days. - You see many half completed setups and trials that never quite reach the finish line. ## System approach - Model activation as a sequence of milestones rather than a single event, with early proxy outcomes that are easier to reach. - Provide scaffolding that makes progress visible and keeps users confident that they are on the right path. - Align humans and automation around nudging users through intermediate steps instead of only celebrating the final outcome. ## Execution steps - Map your current long path to value, from signup through each major dependency (for example data integration, approvals, configuration). - Identify 2–3 meaningful intermediate milestones that users can hit earlier, such as completing configuration, connecting a sandbox, or seeing sample results. 
- Redefine activation for early cohorts as reaching a strong intermediate milestone plus clear intent to continue, rather than only the final outcome. - Design onboarding, messaging, and human touchpoints to focus on getting users to each intermediate milestone in sequence. - Add progress indicators and expectations in the product so users understand that value will accumulate over time. - Instrument every milestone and analyze where users drop off; concentrate improvements and human help on those points. ## Metrics to watch - Completion rates for intermediate milestones — Trend up as flows and support improve. (Segment by persona or use case to find where progress is hardest.) - Time between milestones (for example signup to first connection, connection to first result) — Trend down over time. (Long gaps are where users lose momentum or internal support.) - Overall activation and D30 retention for users who hit intermediate milestones — Be significantly higher than for those who do not. (Confirms that milestones are meaningful predictors of eventual value.) - Trial or pilot conversion rate for long value path products — Trend up as intermediate activation design improves. (Shows that better scaffolding translates into business outcomes.) ## Failure modes - Creating too many milestones, which makes progress feel slow and bureaucratic. - Picking milestones that are easy to reach but not actually predictive of long term success. - Failing to update activation definitions and dashboards, so teams keep optimizing for the wrong moment. - Relying solely on automated nudges during long setup periods, without any human support for complex cases. --- # Segment activation by account maturity Source: https://www.skene.ai/resources/playbooks/segment-activation-by-maturity Job to be done: Stop treating first-time users and returning accounts as the same activation problem. Tags: Activation, Segmentation, Lifecycle Last updated: 2026-01-05 Define different activation journeys and success criteria for new, returning, and mature accounts so each gets the right path to value. ## Problem context - Activation is often treated as a one time event, even though accounts evolve through multiple phases of adoption. - New, returning, and mature accounts have very different needs, but they see the same onboarding and success metrics. - As products grow more complex, a single activation journey fails to fit all stages of the lifecycle. ## What breaks - Returning accounts are forced through basic onboarding that feels irrelevant, while new accounts are not given enough scaffolding. - Metrics blur together different lifecycle stages, making it hard to know where the real problems are. - Experiments that help one maturity segment can hurt another, but you cannot see it because everything is averaged. ## When this applies - You have a mix of new and existing accounts using the product, and they behave differently. - Some accounts churn and later come back, or expand into new teams and use cases over time. - You have enough data to segment accounts by history, even if the segmentation is basic at first. ## System approach - Define a simple maturity model for accounts, such as new, ramping, established, and expanding. - Specify separate activation definitions and journeys for at least new versus returning or expanding accounts. - Instrument and report activation and retention by maturity segment so you can target improvements precisely. 
## Execution steps - Analyze your account base to see common lifecycle patterns, such as initial adoption, periods of low usage, reactivation, and expansion. - Draft a maturity model with clear criteria for each stage that can be derived from product and billing data. - Define distinct activation goals for new versus returning or expanding accounts, reflecting the different jobs they are trying to do. - Adjust onboarding, in product prompts, and success plays so that they route accounts into the appropriate journeys based on maturity. - Update dashboards so activation and retention are segmented by maturity stage, not only by plan or channel. - Run targeted experiments for specific maturity segments and compare results against segment level baselines. ## Metrics to watch - Activation rate by maturity segment — Trend up within each segment. (Avoid judging success only on aggregate activation which can mask segment specific issues.) - Reactivation rate for returning or previously churned accounts — Trend up as tailored journeys improve. (Shows whether segmented activation helps bring back accounts that stalled before.) - Expansion and retention metrics for established and expanding accounts — Trend up as maturity aware flows reduce friction. (Healthy expansion among mature accounts indicates that activation at later stages is working.) - Share of accounts in each maturity segment over time — Shift toward healthy distributions (for example more established and expanding, fewer stuck in ramping). (Helps you see whether accounts are progressing or getting stuck in early stages.) ## Failure modes - Creating an overly complex maturity model that few people understand or use. - Defining segments that depend on data you cannot reliably compute. - Failing to actually change journeys or metrics after defining maturity stages. - Treating maturity as a static attribute instead of something accounts move through over time. --- # Decide when PLG should stop Source: https://www.skene.ai/resources/playbooks/decide-when-plg-should-stop Job to be done: Recognize when a pure PLG motion is the wrong tool for the next stage. Tags: Strategy, Failure modes Last updated: 2026-01-05 Use clear criteria to decide when to rebalance from pure PLG toward hybrid or sales led motions without abandoning what works. ## Problem context - PLG is often adopted as an article of faith, even when the product, market, or go to market reality makes a pure PLG motion a poor fit. - Teams hesitate to question PLG strategies because doing so can feel like admitting failure rather than learning. - As the company evolves, the motion that worked at one stage may no longer be the right primary growth engine. ## What breaks - You continue to invest heavily in self serve infrastructure that no longer drives the majority of growth. - Sales, marketing, and product pull in different directions because they implicitly assume different primary motions. - The organization keeps chasing PLG benchmarks that do not match the economics or buying behavior of your market. ## When this applies - You have run PLG experiments long enough to have clear data on activation, retention, and expansion. - Large deals, complex implementations, or procurement processes are now a significant part of your pipeline. - There is internal debate about whether to double down on PLG, pivot to sales-led, or run a hybrid model. ## System approach - Evaluate PLG as one possible operating system among several, using explicit criteria and data rather than ideology. 
- Look at fit by segment: some parts of your business may remain PLG-first while others move toward sales-led or hybrid. - Design a transition plan that preserves the best of PLG (product quality, instrumentation) while changing how growth is orchestrated. ## Execution steps - Clarify what PLG means in your context today, including which segments and products it actually applies to. - Assess strategic fit using criteria such as self serve viability, deal size, implementation complexity, and procurement friction. - Analyze unit economics and pipeline composition to see where PLG is working well and where it is struggling. - Develop scenarios for the next 12–24 months, such as PLG-first, sales-led, or hybrid, and model their implications for org structure, metrics, and investment. - Decide explicitly where PLG should remain primary, where it should become a supporting motion, and where it should be paused or stopped. - Communicate the decision and its rationale clearly to product, sales, marketing, and success, tying it back to data and fit rather than ideology. - Adjust roadmaps, metrics, and incentives to match the chosen balance, and revisit the decision periodically as the market and product evolve. ## Metrics to watch - Unit economics of PLG cohorts versus sales-led or hybrid cohorts — Use to guide where PLG remains primary. (Compare acquisition cost, payback periods, and lifetime value across motions.) - Share of revenue and pipeline coming from PLG-sourced accounts — Track trends as you rebalance motions. (Helps avoid unintentionally starving a motion that still works well for specific segments.) - Activation, retention, and expansion by segment and motion — Use to identify where PLG is structurally weak or strong. (Do not rely on overall averages; look at the combination of product type and go to market model.) - Organizational alignment indicators (for example, deal confusion, handoff conflicts) — Trend down after decisions are made and communicated. (Qualitative, but important to monitor through retrospectives and leadership check ins.) ## Failure modes - Treating PLG as a binary “on or off” switch instead of a spectrum and portfolio of motions. - Declaring the end of PLG without preserving the valuable infrastructure and practices it introduced. - Allowing ideology to dominate the discussion instead of grounding decisions in product and market fit. - Failing to adjust metrics and incentives, so teams continue optimizing for a motion that is no longer primary. --- # Docs — /resources/docs/cloud/features Source: https://www.skene.ai/resources/docs/cloud/features Features & Deploy An **action** in Skene is a growth automation rule: when something happens in your database, take an action. Features are compiled into state machines and deployed as Postgres triggers. In the workspace sidebar, configured loops are listed under **Actions** (/workspace//agent-actions) when your Supabase connection is read-write. Three ways to create features 1. Premade templates The fastest way to get started. 
Open **Actions** and create from a template library: | Template | Trigger | Action | Use case | |---|---|---|---| | **Heartbeat** | INSERT on any table | Fire analytics event | Track new entities, feature usage | | **Welcome Email Drip** | INSERT on users table | Send welcome email via Resend | Onboarding, first-time experience | | **Inactivity Re-engagement** | Scheduled (7+ days idle) | Send re-engagement email | Retention, win-back | Each template includes a pre-configured trigger, data requirements, and action. During deploy, Skene matches the template to your schema and asks you to confirm the table mapping. 2. Skene Agent Describe what you want in natural language. The Agent is available from: - The **Agent** page (dedicated chat interface) - Context panels where loops are edited - The **Overview** workspace home Example prompts: - "Send a welcome email when a user signs up" - "Track when a user upgrades their plan" - "Alert me when a user hasn't logged in for 14 days" The Agent introspects your schema, proposes a feature definition (trigger table, conditions, action), and can deploy it directly. You can refine through conversation. 3. CLI push The skene CLI (/resources/docs/skene) analyzes your codebase and generates growth loop definitions locally. Push them to Cloud: [code:bash] uvx skene analyze . # Analyze codebase uvx skene plan # Generate growth plan uvx skene build # Create action definitions uvx skene push # Push to Skene Cloud skene push sends artifacts to POST /api/v1/push, which stores them as a deploy snapshot. Pushed artifacts include the engine manifest, feature registry, and schema analysis — these feed into the Cloud's compile pipeline. You can also link a GitHub repository on the **Agent Engine** page. Skene pulls skene/engine.yaml and feature files from the repo and syncs them automatically. Compiling features Compilation transforms a feature definition into a deployable state machine. Click **Compile** on the **Actions** or **Agent Engine** page. During compilation, Skene: 1. **Introspects your schema** via the connected Supabase project 2. **Matches the feature** to a target table and operation using LLM-powered schema matching 3. **Generates conditions** — state checks, cooldown periods, max fire limits 4. **Generates effects** — state transitions, journey events, metadata updates 5. **Derives lifecycle definitions** — which stages the feature operates across Conditions Conditions control when a feature fires: | Condition | Example | |---|---| | **State check** | Only fire if user is in "trial" stage | | **Cooldown** | Don't fire more than once per 7 days | | **Max fires** | Only fire once per entity lifetime | Effects Effects describe what happens when a feature fires: | Effect | Example | |---|---| | **State transition** | Move user from "trial" to "onboarded" | | **Record event** | Log "welcome_email_sent" on the journey | | **Set metadata** | Store onboarding_step: 2 on the entity | Deploying to Supabase After compilation, click **Deploy** to push triggers to your Supabase project. This requires a read-write Supabase connection (/resources/docs/cloud/integrations/supabase#connect-your-supabase-project). Each deployed feature creates a Postgres trigger that: 1. Captures row data and enrichment context 2. Sends the event to Skene via pg_net webhook 3. 
Authenticates with a per-workspace encrypted secret See Supabase Integration — Deploying triggers (/resources/docs/cloud/integrations/supabase#deploying-triggers) for full details on what gets deployed and how trigger reconciliation works. Feature lifecycle Features move through these statuses: [code] Draft → Active ↔ Paused | Status | Trigger state | Events processed? | |---|---|---| | **Draft** | Not deployed | No | | **Active** | Deployed and firing | Yes — conditions evaluated, actions executed | | **Paused** | Deployed but suppressed | Events logged but actions skipped | You can toggle between Active and Paused from the **Actions** page (or the loop’s detail view). Deleting a feature archives it and removes the trigger from your database. Feature detail page Each feature has a detail page where you can configure: - **Trigger** — table, operation type, conditions - **Data logic** — enrichment joins, related data to fetch - **Action** — tool type (email, webhook, analytics, notification), tool-specific config - **Email templates** — AI-generated HTML templates for email actions - **Compiler overrides** — advanced tuning for conditions and effects - **Test trigger** — send a test event to verify the feature works Next steps - Logs (/resources/docs/cloud/logs) — Monitor feature decisions and action results - Supabase Integration (/resources/docs/cloud/integrations/supabase) — Trigger deployment details - Schema Analysis (/resources/docs/cloud/schema-analysis) — How schema matching works --- # Docs — /resources/docs/cloud/integrations/supabase Source: https://www.skene.ai/resources/docs/cloud/integrations/supabase Supabase Integration Skene connects to your Supabase project via OAuth and deploys growth automation triggers directly into your Postgres database. Events fire inside your database and flow back to Skene for intelligent dispatch — no data pipeline, no middleware, no ETL. How it works 1. **Connect** — Link your Supabase project via OAuth from the workspace Integrations page. 2. **Define** — Describe a growth feature in plain English or pick a premade template. Skene compiles it into a state machine with conditions, effects, and lifecycle stages. 3. **Deploy** — One click pushes database triggers into your Supabase project. Trigger reconciliation keeps your database in sync. 4. **Run** — Row changes fire triggers inside Postgres. Events are sent to Skene via pg_net webhook, where the runtime evaluates conditions, executes actions, and logs every decision. Prerequisites - A Supabase (https://supabase.com) project - The pg_net extension enabled in your Supabase project (used by deployed triggers to send webhooks) - A Skene workspace Connect your Supabase project 1. Open Integrations In the workspace sidebar, open **Integrations** (under **Manage** — next to Logs and API Keys). You can also go directly to /workspace//integrations. 2. Choose access mode Skene offers two OAuth access levels: | Mode | What it allows | When to use | |------|---------------|-------------| | **Read-only** | Schema introspection and analysis | Exploring your schema before committing to deployment | | **Read-write** | Schema introspection + trigger deployment | Deploying and managing growth features | Click **Connect Supabase** and choose your access mode. You will be redirected to Supabase to authorize. 3. Select a project After authorizing, you will be redirected back to Skene. If your Supabase organization has multiple projects, select the one you want to link from the dropdown. 4. 
Verify connection Once linked, the Integrations page shows your connection status: - **Project ref** and name - **Access mode** (read-only or read-write) - **Schema status** — whether the skene_growth schema and required tables are present - **Extension status** — whether pg_net is enabled Schema setup When you deploy your first feature, Skene creates a skene_growth schema in your Supabase project with: | Object | Purpose | |--------|---------| | skene_growth.event_log | Captures trigger payloads before they are sent to Skene | | skene_growth.failed_events | Stores events that failed to dispatch | | skene_growth.enrichment_map | JSON-based data joins for adding context to events | | skene_growth.enrich() | Function that resolves enrichment maps at trigger time | You can also apply the schema migration manually from the Integrations page before deploying. Deploying triggers From the dashboard 1. Create or select a growth action on the **Actions** page. 2. Click **Compile** — Skene introspects your schema and generates a state machine. 3. Click **Deploy** — trigger SQL is pushed to your Supabase project via the OAuth token. From the CLI [code:bash] # Analyze your codebase and generate growth loops uvx skene analyze . uvx skene plan uvx skene build Push to Skene Cloud (which deploys triggers) uvx skene push What gets deployed Each feature creates a Postgres trigger on the target table (e.g., INSERT on auth.users). The trigger: 1. Captures the row data and any enrichment context. 2. Calls net.http_post() (from pg_net) to send the event to your Skene workspace endpoint. 3. Authenticates with a per-workspace secret (skene_proxy_secret), encrypted at rest with AES-256-GCM. Trigger reconciliation Skene automatically reconciles triggers on each compile/deploy cycle: - **Adds** triggers for new or updated features - **Removes** orphan triggers no longer referenced by any feature - **Deduplicates** — multiple features on the same table and operation share one trigger Event flow [code] Your Supabase database └─ Row change fires trigger └─ pg_net POST → www.skene.ai/api/v1/cloud/ingest/db-trigger └─ Skene runtime ├─ Log raw event ├─ Match to features (conditions, cooldowns, state checks) ├─ Execute actions (email, webhook, state transition) └─ Update journey state and logs Events are authenticated via the x-skene-secret header, matched against the workspace's encrypted proxy secret. Premade templates Three templates are available to get started quickly: | Template | Trigger | Action | |----------|---------|--------| | **Heartbeat** | New record inserted | Fire analytics event | | **Welcome Email Drip** | New user signup (INSERT on users table) | Send welcome email via Resend | | **Inactivity Re-engagement** | Scheduled (7+ days inactive) | Send re-engagement email | Monitoring Logs The **Logs** page in your workspace shows: - Raw ingest records with timestamps - Which features matched each event and why - Action dispatch results (success, failure, skipped) - Semantic matcher audit trails for edge cases Trigger status The **Actions** page shows each loop’s deploy status, last deployed timestamp, and linked trigger ID. Security - **OAuth tokens** are encrypted with AES-256-GCM and stored in Skene's database, never in browser storage. - **Proxy secrets** authenticate ingest webhooks — each workspace has a unique secret. - **No customer data stored** beyond event metadata needed for dispatch decisions. - **Read-only mode** available for schema analysis without deployment risk. 
Disconnecting To unlink your Supabase project, open **Integrations** in the workspace sidebar and click **Disconnect**. This removes the OAuth tokens from Skene but does not remove deployed triggers from your database. To clean up triggers, drop the skene_growth schema: [code:sql] DROP SCHEMA IF EXISTS skene_growth CASCADE; Upgrading access mode To switch from read-only to read-write (or vice versa), disconnect and reconnect with the desired access mode. Troubleshooting "pg_net extension not enabled" Enable it from the Supabase dashboard: **Database → Extensions → Search for pg_net → Enable**. Triggers not firing 1. Check that the feature is in **Active** status (not Draft or Paused). 2. Verify the trigger exists in your database: SELECT * FROM information_schema.triggers WHERE trigger_schema = 'skene_growth'; 3. Check the **Logs** page for ingest errors. OAuth token expired Skene automatically refreshes OAuth tokens. If you see auth errors, disconnect and reconnect from the Integrations page. Next steps - Push (/resources/docs/skene/guides/push) — Deploy growth loops from the CLI - MCP Server (/resources/docs/skene/integrations/mcp-server) — Use skene with AI assistants - Configuration (/resources/docs/skene/guides/configuration) — Config files and environment variables --- # Docs — /resources/docs/cloud/logs Source: https://www.skene.ai/resources/docs/cloud/logs Logs & Monitoring The **Logs** page shows every event that flows through your workspace — from database trigger fires to feature decisions and action results. What gets logged Every time a trigger fires in your Supabase database, Skene logs: | Field | Description | |---|---| | **Timestamp** | When the event was received | | **Table** | Source table and schema (e.g., public.users) | | **Operation** | INSERT, UPDATE, DELETE, or SCHEDULED | | **Trigger** | Name of the database trigger that fired | | **Event data** | Row payload and enrichment context | | **Loop decisions** | Which features matched and what actions were taken | Logs are retained for the last 7 days and paginated at 50 entries per page. Reading log entries Each log entry shows the raw event and a **loop dispatch card** with per-feature outcomes. Dispatch phases For each feature that could match an event, Skene evaluates four phases in order: | Phase | What it checks | Possible outcomes | |---|---|---| | **Precheck** | Subject extraction, required fields present | Pass / Fail | | **Deterministic gate** | Conditions: state check, cooldown, max fires | Pass / Fail / Skip | | **Semantic matcher** | LLM intent matching (fallback if gate fails) | Pass / Fail | | **Action** | Tool selection and execution | Success / Fail | A feature fires only if all phases pass. The dispatch card shows which phase passed or failed and why. Tool selection transparency When a feature fires, Skene selects an action tool. 
The log shows: - **Path used** — exact match (deterministic) or LLM-ranked - **Confidence score** — how confident the matcher is (must exceed the feature's minimum threshold) - **Ranked candidates** — top tool candidates with individual confidence scores - **Selected tool** — which tool was chosen and the reason Dispatch outcomes Each feature in a log entry shows one of these outcomes: | Outcome | Meaning | |---|---| | **Success** | All phases passed, action executed | | **Failed** | A phase failed (hover to see which one and why) | | **Skipped** | Feature is paused or conditions not met | Debugging common issues Event received but no feature matched - Check that the feature status is **Active** (not Draft or Paused) - Verify the feature's trigger table and operation match the event source - Check the deterministic gate — a cooldown or max fires limit may be blocking Deterministic gate failed - **Subject not found** — the entity ID column couldn't be resolved from the event payload. Check the feature's subject path configuration. - **State check failed** — the entity is in the wrong lifecycle stage. Check lifecycle stage assignments on the workspace journey graph or **Agent Engine** page. - **Cooldown active** — the feature fired recently for this entity. Wait for the cooldown to expire or adjust the cooldown period. - **Max fires reached** — the feature has already fired the maximum number of times for this entity. Semantic matcher failed The semantic matcher is a fallback that uses LLM matching when deterministic resolution fails: - **Low confidence** — the event payload doesn't closely match the feature's trigger and action descriptions. Improve the feature's description text. - **Circuit open** — too many recent failures caused the matcher to temporarily disable. It will auto-recover. Action failed - **Email delivery failure** — check that Resend is configured and the recipient email is valid - **Webhook timeout** — the target URL didn't respond within the timeout window - **Tool not found** — the configured action tool is disabled or doesn't exist Agent activity The **Agent Activity** page (/workspace/[slug]/agent-activity) shows a separate audit trail of all actions taken by the Skene Agent: - Feature creation and edits - Email template generation - Tool executions - Timestamps and associated features Next steps - Features & Deploy (/resources/docs/cloud/features) — Configure features and their conditions - Supabase Integration (/resources/docs/cloud/integrations/supabase) — Trigger deployment and event flow details --- # Docs — /resources/docs/cloud Source: https://www.skene.ai/resources/docs/cloud Skene Cloud Skene Cloud is the hosted platform for automating product-led growth. Connect your Supabase project, analyze your schema, define growth features, and deploy database triggers — all from the dashboard. How it works [code] Connect Supabase → Analyze Schema → Create Features → Compile → Deploy Triggers → Monitor 1. **Connect (/resources/docs/cloud/integrations/supabase)** your Supabase project via OAuth (read-only or read-write). 2. **Analyze (/resources/docs/cloud/schema-analysis)** your database schema. Skene introspects tables, identifies time/trigger/value fields, and suggests lifecycle stages. 3. **Create features (/resources/docs/cloud/features)** using premade templates, the Skene Agent, or by pushing from the CLI. 4. **Compile** features into state machines with conditions, effects, and lifecycle transitions. 5. 
**Deploy** database triggers directly into your Supabase project. 6. **Monitor (/resources/docs/cloud/logs)** events, feature decisions, and action results in the Logs page. CLI + Cloud The skene CLI (/resources/docs/skene) and Skene Cloud work together. The CLI analyzes your codebase locally and pushes artifacts to the Cloud via POST /api/v1/push: | CLI artifact | Cloud storage path | Purpose | |---|---|---| | engine.yaml | skene/engine.yaml | Feature definitions and state machines | | schema.yaml | skene/schema.yaml | Database schema snapshot | | growth-manifest.json | skene-context/growth-manifest.json | Codebase analysis results | | feature-registry.json | feature-registry/registry.json | Feature tracking across runs | | growth-plan.md | skene-context/growth-plan.md | Growth strategy document | Cloud can also read your schema directly from the connected Supabase project — the CLI path is optional. Both flows feed into the same compile and deploy pipeline. Documentation Setup - Supabase Integration (/resources/docs/cloud/integrations/supabase) — Connect your project, deploy triggers, event flow Using Skene Cloud - Schema Analysis (/resources/docs/cloud/schema-analysis) — Database introspection, TTV map, lifecycle stages - Features & Deploy (/resources/docs/cloud/features) — Create, compile, and deploy growth features - Logs (/resources/docs/cloud/logs) — Monitor events, debug dispatch decisions Administration - Workspace (/resources/docs/cloud/workspace) — Members, API keys, integrations Related - skene CLI (/resources/docs/skene) — Analyze codebases locally, generate growth plans, and push from the terminal --- # Docs — /resources/docs/cloud/schema-analysis Source: https://www.skene.ai/resources/docs/cloud/schema-analysis Schema Analysis Schema analysis is the first step after connecting your Supabase project. Skene reads your database structure, identifies key patterns, and builds a foundation for feature compilation and trigger deployment. What it does Skene introspects your Postgres information_schema to understand: - **Tables and columns** — names, types, constraints, primary keys - **Foreign key relationships** — how tables reference each other - **Schema boundaries** — which schemas are available (public, auth, custom schemas) From this raw structure, Skene builds a **TTV (Time-Trigger-Value) map** and suggests **lifecycle stages** for your product. Running analysis From the dashboard **Workspace home** (/workspace/) is the main overview: run analysis, explore the **TTV (Time-Trigger-Value) journey graph**, filter by subject, and use the overview dock. The flow is typically: 1. **Precheck** — Verify Supabase connection and schema access 2. **Enrich** — Optionally pull context from a linked GitHub repository to improve table labeling 3. **Seed** — Generate lifecycle stage suggestions from the schema 4. **Compile** — Build the TTV map and prepare for feature creation The **Agent Engine** page (/workspace//skene-engine) offers the same graph plus deeper engine tools: compiled loops, engine YAML, GitHub sync, and push artifacts — use it when you are iterating on the engine and manifests alongside schema analysis. From the CLI [code:bash] # Analyze your codebase (produces growth-manifest.json) uvx skene analyze . Push results to Skene Cloud uvx skene push The CLI analyzes your codebase (not the database directly) and pushes the manifest to Cloud. Cloud can then cross-reference CLI analysis with its own schema introspection. 
TTV map The TTV (Time-Trigger-Value) map scores each table across three dimensions: | Dimension | What it detects | Examples | |---|---|---| | **Time** | Timestamp fields that indicate when events happen | created_at, updated_at, last_login_at | | **Trigger** | State or status fields that indicate transitions | status, state, plan, role | | **Value** | Identity and reference fields that link to entities | user_id, account_id, product_id | Tables with high TTV scores are strong candidates for growth triggers — they represent meaningful user actions and state changes. Lifecycle stages Based on the TTV analysis, Skene suggests lifecycle stages for your product. These represent the journey a user takes: [code] Trial → Onboarded → Active → At Risk → Churned How stages are assigned Skene maps table states to lifecycle stages using field names and patterns: - A status = 'trial' field maps to the **Trial** stage - An onboarding_completed_at timestamp maps to **Onboarded** - Recent activity timestamps map to **Active** - Absence of recent activity maps to **At Risk** Overriding stages On the journey graph (workspace home or Agent Engine), you can: - **Reassign** tables to different lifecycle stages by clicking nodes in the graph - **Add or remove** lifecycle stages from the stage panel - **Reorder** stages to match your product's actual user journey Primary subject detection Skene identifies the **primary subject** — the entity whose lifecycle you're tracking. This is typically a user, account, or organization. Detection uses: - Foreign key relationships (which table do others reference?) - Column naming patterns (user_id, account_id, org_id) - Table structure (the auth.users table in Supabase projects) The primary subject determines how features resolve their target entity when a trigger fires. How analysis feeds into features Schema analysis produces the context that feature compilation needs: 1. **Table mappings** — Which table and operation a feature should trigger on 2. **Column selection** — Which columns to include in trigger payloads (sensitive columns like passwords and tokens are automatically excluded) 3. **Entity resolution** — How to find the user or entity ID from a triggered row 4. **Enrichment** — Which related tables to join for additional context When you create a feature, Skene uses this analysis to automatically suggest the best table, columns, and entity mapping. You can override any suggestion. Next steps - Features & Deploy (/resources/docs/cloud/features) — Create features using the schema analysis results - Supabase Integration (/resources/docs/cloud/integrations/supabase) — Schema setup and trigger deployment details --- # Docs — /resources/docs/cloud/workspace Source: https://www.skene.ai/resources/docs/cloud/workspace Workspace A workspace is the top-level container in Skene Cloud. It holds your features, triggers, logs, integrations, and team members. Each workspace connects to one Supabase project. Members and roles Manage members from **Settings → Members**. | Role | Permissions | |---|---| | **Owner** | Full access. Manage billing, members, integrations, and features. | | **Member** | Create and manage features, view logs, use the Agent. Cannot manage billing or members. | Inviting members Click **Invite** on the Members page, enter an email address, and send the invitation. The invited user receives an email with a link to join the workspace. Removing members Click the remove button next to a member's name. This revokes their access immediately. 
Owners cannot be removed — transfer ownership first. API keys API keys authenticate external integrations and the skene CLI (/resources/docs/skene). Manage them from the **API Keys** page. Creating a key Click **Create API Key**. The key is shown once — copy it immediately. Keys are prefixed with sk_ws_ and scoped to the workspace. Using API keys API keys authenticate requests to the workspace HTTP API: [code:bash] # CLI authentication uvx skene login # Interactive login stores the key uvx skene push # Uses stored key Direct API usage curl -X POST https://www.skene.ai/api/v1/push \ -H "Authorization: Bearer sk_ws_..." \ -H "Content-Type: application/json" \ -d '{"engine_yaml": "..."}' Keys are accepted in three header formats: - Authorization: Bearer sk_ws_... - X-Skene-Token: sk_ws_... - X-API-Key: sk_ws_... Allowed origins Configure allowed origins for CORS when embedding Skene APIs in client-side applications. Edit the origins list from the API Keys page. Revoking a key Click **Delete** next to a key. This invalidates the key immediately — any integrations using it will stop working. Integrations The **Integrations** page manages external service connections: | Integration | Purpose | Docs | |---|---|---| | **Supabase** | Database connection for schema analysis and trigger deployment | Supabase Integration (/resources/docs/cloud/integrations/supabase) | | **GitHub** | Repository linking for engine manifest sync | Configured on the **Agent Engine** page (/workspace/.../skene-engine) | Workspace API The workspace exposes three API endpoints authenticated with API keys: | Endpoint | Method | Purpose | |---|---|---| | /api/v1/push | POST | Push CLI artifacts (engine YAML, manifests, registry) | | /api/v1/chat/completions | POST | OpenAI-compatible LLM proxy (streaming and non-streaming) | | /api/v1/cloud/ingest/db-trigger | POST | Receive events from deployed Supabase triggers | The Push and Chat Completions endpoints use API key auth. The ingest endpoint uses a per-workspace proxy secret (managed automatically during Supabase setup). Next steps - Supabase Integration (/resources/docs/cloud/integrations/supabase) — Connect your database - Features & Deploy (/resources/docs/cloud/features) — Start creating growth features - skene CLI — Login (/resources/docs/skene/guides/login) — Authenticate the CLI with your API key --- # Docs — /resources/docs/skene/getting-started/installation Source: https://www.skene.ai/resources/docs/skene/getting-started/installation Installation How to install skene using uvx, pip, or from source. Prerequisites - **Python 3.11 or later.** Check your version with python3 --version. --- Option 1: uvx (Recommended) uvx (https://docs.astral.sh/uv/) runs Python CLI tools without installing them globally. This is the fastest way to start using skene. If you don't have uv installed yet: [code:bash] curl -LsSf https://astral.sh/uv/install.sh | sh Then run any skene command directly: [code:bash] uvx skene analyze . uvx skene plan No pip install needed. uvx downloads the package into an isolated environment on first run and caches it for subsequent calls. --- Option 2: pip Install skene into your current Python environment: [code:bash] pip install skene After installation, the skene and skene commands are available on your PATH: [code:bash] skene analyze . 
skene chat --- Option 3: From source Clone the repository and install in development mode: [code:bash] git clone https://github.com/SkeneTechnologies/skene.git cd skene Using uv (recommended): [code:bash] uv sync Using pip: [code:bash] pip install -e . To include all optional dependencies: [code:bash] pip install -e ".[mcp,ui]" --- Optional extras Install extras for additional functionality: | Extra | What it adds | |-------|--------------| | mcp | MCP server support (skene-mcp entry point). Adds mcp>=1.0.0 and xxhash>=3.0. | | ui | Interactive terminal prompts via questionary>=2.0. | [code:bash] # With uvx (use --from to specify extras) uvx --from "skene[mcp]" skene-mcp With uv uv pip install skene[mcp] With pip pip install skene[mcp] pip install skene[mcp,ui] --- Verifying the installation Run the version check to confirm everything is working: [code:bash] skene --version Expected output: [code] skene 0.2.1 If you used uvx: [code:bash] uvx skene --version --- Next steps Proceed to the Quickstart (/resources/docs/skene/getting-started/quickstart) to configure an LLM provider and run your first codebase analysis. --- # Docs — /resources/docs/skene/getting-started/quickstart Source: https://www.skene.ai/resources/docs/skene/getting-started/quickstart Quickstart Get from zero to a deployed growth loop. **Prerequisites** - Python 3.11 or later - uv (https://docs.astral.sh/uv/) installed (curl -LsSf https://astral.sh/uv/install.sh | sh) - An API key from OpenAI, Google Gemini, or Anthropic -- OR a local LLM running via LM Studio (https://lmstudio.ai/) or Ollama (https://ollama.com/) Setup Create and configure [code:bash] # Create a config file with sensible defaults uvx skene config --init Set up your LLM provider and API key interactively uvx skene config The interactive setup walks you through provider, model, and API key selection. **Tip:** You can skip config setup entirely by passing --api-key and --provider flags directly to each command, or by setting the SKENE_API_KEY and SKENE_PROVIDER environment variables. Analyze, plan, build Analyze your codebase [code:bash] uvx skene analyze . Scans your codebase and generates files in ./skene-context/: - **growth-manifest.json** -- your tech stack, growth features, and opportunities - **growth-template.json** -- a growth template tailored to your business type - **feature-registry.json** -- tracks features across analysis runs Generate a growth plan [code:bash] uvx skene plan Produces a prioritized growth plan with executive summary, opportunities, and a technical execution section. For activation-focused analysis instead of general growth, add --activation. Build an implementation prompt [code:bash] uvx skene build Generates a focused implementation prompt from your growth plan and asks where to send it -- **Cursor**, **Claude**, or **Show** in terminal. Also saves a growth loop definition with telemetry specs to ./skene-context/. **Tip:** Use --target file to skip the interactive menu (useful for scripting). Verify and deploy Check implementation status After implementing the growth loop, verify that all requirements are met: [code:bash] uvx skene status Checks that required files, functions, and patterns exist in your codebase. Each loop is marked **COMPLETE** or **INCOMPLETE**. Add --find-alternatives to use the LLM to find alternative implementations. 
Push to Supabase and upstream If your project uses Supabase, initialize the base schema and push growth loop telemetry: [code:bash] # One-time: create the base schema migration - to use in deploying the Skene Growth schema yourself. uvx skene init Generate telemetry triggers and push uvx skene login --upstream https://skene.ai/workspace/ uvx skene push What you get Your ./skene-context/ directory contains: | File | Description | |---|---| | growth-manifest.json | Tech stack, growth features, opportunities | | growth-template.json | Growth template tailored to your business type | | feature-registry.json | Features tracked across analysis runs, linked to growth loops | | growth-plan.md | Prioritized growth plan with technical execution details | | implementation-prompt.md | Ready-to-use prompt for your AI coding assistant | | growth-loops/*.json | Growth loop definitions with telemetry specs and verification requirements | Alternative: Quick one-liner If you want to try the analysis without setting up a config file first, pass your API key inline: [code:bash] uvx skene analyze . --api-key "your-key" This uses the default provider (openai) and model (gpt-4o). To use a different provider: [code:bash] uvx skene analyze . --api-key "your-key" --provider gemini --model gemini-3-flash-preview Alternative: Free preview (no API key) If you want to see what skene does before configuring an LLM, simply run analyze without an API key: [code:bash] uvx skene analyze . Without an API key (and no local provider), the command falls back to a sample preview showing the kind of output a full analysis produces. Next steps - Analyze command in depth (/resources/docs/skene/guides/analyze) -- all flags, output customization, excluding folders - Plan command in depth (/resources/docs/skene/guides/plan) -- context directories, activation mode, custom manifest paths - Build command in depth (/resources/docs/skene/guides/build) -- prompt generation, Cursor/Claude integration - Push command in depth (/resources/docs/skene/guides/push) -- Supabase migrations and upstream deployment - Status command in depth (/resources/docs/skene/guides/status) -- growth loop validation and alternative matching - Features (/resources/docs/skene/guides/features) -- managing and exporting the feature registry - Login (/resources/docs/skene/guides/login) -- authenticating with Skene Cloud upstream - Configuration reference (/resources/docs/skene/guides/configuration) -- config files, environment variables, precedence rules - LLM providers (/resources/docs/skene/guides/llm-providers) -- setup for OpenAI, Gemini, Anthropic, LM Studio, Ollama, and generic endpoints --- # Docs — /resources/docs/skene/guides/analyze Source: https://www.skene.ai/resources/docs/skene/guides/analyze Analyze Command The analyze command scans a codebase to detect its technology stack, identify existing growth features, surface growth opportunities, and flag revenue leakage -- all powered by an LLM of your choice. Prerequisites Before running analyze, you need one of the following: - An API key configured for a cloud LLM provider (OpenAI, Gemini, Anthropic). See configuration (/resources/docs/skene/guides/configuration) for setup instructions. - A local LLM server running (LM Studio, Ollama). No API key required. Basic usage Analyze the current directory: [code:bash] uvx skene analyze . Analyze a specific project path: [code:bash] uvx skene analyze ./my-project Save the manifest to a custom location: [code:bash] uvx skene analyze . 
-o ./output/manifest.json If no API key is configured and you are not using a local provider, the command displays a sample growth analysis preview instead of running a full LLM-powered analysis. Flag reference | Flag | Short | Description | |------|-------|-------------| | PATH | | Path to codebase (default: ., must be an existing directory) | | --output PATH | -o | Output path for growth-manifest.json | | --api-key TEXT | | API key for LLM provider (or SKENE_API_KEY env var) | | --provider TEXT | -p | LLM provider: openai, gemini, anthropic/claude, lmstudio, ollama, generic (aliases: openai-compatible, openai_compatible) | | --model TEXT | -m | Model name (e.g., gpt-4o, gemini-3-flash-preview, claude-sonnet-4-5) | | --base-url TEXT | | Base URL for OpenAI-compatible endpoint (required for generic provider; also SKENE_BASE_URL env var) | | --product-docs | | Generate product-docs.md with user-facing feature documentation (creates v2.0 manifest) | | --features | | Only analyze growth features and update feature-registry.json (skips opportunities and revenue leakage) | | --exclude TEXT | -e | Folder names to exclude from analysis (repeatable). Also configurable in .skene.config as exclude_folders. | | --verbose | -v | Enable verbose output | | --debug | | Log all LLM input/output to .skene/debug/ | | --no-fallback | | Disable model fallback on rate limits. Retries the same model with exponential backoff instead of switching to a cheaper model. | Output files By default, output files are saved to ./skene-context/. You can override this with the -o flag. If the -o path points to a directory or has no file extension, the tool appends growth-manifest.json automatically. growth-manifest.json The primary output. Contains the full analysis results as structured JSON: | Field | Description | |-------|-------------| | version | Schema version ("1.0" or "2.0" with --product-docs) | | project_name | Name of the analyzed project | | description | Brief project description | | tech_stack | Detected stack: framework, language, database, auth, deployment, package_manager, services | | industry | Inferred industry vertical with primary, secondary tags, confidence score, and evidence | | current_growth_features | Features with growth potential, including file paths, detected intent, confidence scores, and growth suggestions | | growth_opportunities | Missing features that could drive growth, with priority levels | | revenue_leakage | Revenue leakage issues with impact assessment and recommendations | | generated_at | ISO 8601 timestamp of when the manifest was generated | feature-registry.json A persistent registry of growth features maintained across analysis runs. Each analysis merges new features into the existing registry: new features are added with first_seen_at, matched features are updated with last_seen_at and marked active, and unmatched features are marked archived. The registry also maps features to growth loops via loop_ids and annotates them with growth pillars. See the features guide (/resources/docs/skene/guides/features) for registry structure and export options. growth-template.json A custom PLG growth template generated alongside the manifest. Contains lifecycle stages tailored to your project's business type and industry. This template is used by the plan command to generate actionable growth plans. product-docs.md (only with --product-docs) A Markdown file containing user-facing product documentation generated from your codebase. 
See the Product docs mode (#product-docs-mode) section below. Product docs mode The --product-docs flag enables an extended analysis that generates user-facing documentation from your codebase: [code:bash] uvx skene analyze . --product-docs When enabled, the command: 1. Collects product overview information (tagline, value proposition, target audience) 2. Identifies user-facing features with descriptions, usage examples, and categories 3. Generates product-docs.md in the output directory 4. Produces a v2.0 manifest (extends the standard manifest with product_overview and features fields) The v2.0 manifest is a superset of v1.0 -- all standard fields remain present. Features-only mode The --features flag runs a lightweight analysis that only updates the feature registry: [code:bash] uvx skene analyze . --features This mode: 1. Runs the growth features analyzer only (skips opportunities and revenue leakage) 2. Loads existing growth loops from skene-context/growth-loops/ 3. Maps loops to features and updates feature-registry.json 4. Enriches the manifest with loop_ids and growth_pillars This is faster than a full analysis and useful when you want to refresh the feature registry after building new growth loops. Excluding folders Use --exclude (or -e) to skip folders during analysis. This is useful for large repositories where you want to ignore generated code, vendored dependencies, or test fixtures. The flag can be used multiple times: [code:bash] uvx skene analyze . --exclude node_modules --exclude dist --exclude .next You can also set exclusions permanently in your config file. In .skene.config: [code:toml] exclude_folders = ["node_modules", "dist", ".next", "vendor", "__pycache__"] CLI exclusions and config exclusions are merged (deduplicated). CLI flags do not override config -- they add to the list. Debug mode The --debug flag logs all LLM input and output to .skene/debug/: [code:bash] uvx skene analyze . --debug This is useful for: - Troubleshooting unexpected analysis results - Inspecting the prompts sent to the LLM - Verifying that the LLM is receiving the right context You can also enable debug mode permanently in your config file: [code:toml] debug = true Or via environment variable: [code:bash] export SKENE_DEBUG=true Provider-specific examples OpenAI [code:bash] uvx skene analyze . --provider openai --api-key "sk-..." --model gpt-4o Google Gemini [code:bash] uvx skene analyze . --provider gemini --api-key "AI..." --model gemini-3-flash-preview Anthropic [code:bash] uvx skene analyze . --provider anthropic --api-key "sk-ant-..." --model claude-sonnet-4-5 Ollama (local) [code:bash] # Start Ollama first, then: uvx skene analyze . --provider ollama --model llama3.3 No API key required. The default model for Ollama is llama3.3. LM Studio (local) [code:bash] # Start LM Studio server first, then: uvx skene analyze . --provider lmstudio No API key required. Generic / OpenAI-compatible For any OpenAI-compatible API endpoint: [code:bash] uvx skene analyze . \ --provider generic \ --base-url "http://localhost:8080/v1" \ --model my-model The --base-url flag is required when using the generic or openai-compatible provider. You can also set it via the SKENE_BASE_URL environment variable. What happens without an API key If no API key is configured and you are not using a local provider (lmstudio, ollama, generic), the command falls back to a **sample report** -- a preview of the analysis output structure using representative data. 
This lets you see what a full analysis looks like before committing to an API key. To run the full analysis, provide an API key via any of these methods: 1. --api-key flag 2. SKENE_API_KEY environment variable 3. api_key field in .skene.config or ~/.config/skene/config Next steps - Plan (/resources/docs/skene/guides/plan) -- Generate a growth plan from your manifest using the Council of Growth Engineers - Features (/resources/docs/skene/guides/features) -- Export and manage the feature registry - Configuration (/resources/docs/skene/guides/configuration) -- Set up persistent config so you do not need to pass flags every time --- # Docs — /resources/docs/skene/guides/build Source: https://www.skene.ai/resources/docs/skene/guides/build Build Command The build command extracts the Technical Execution section from your growth plan, uses an LLM to generate a focused implementation prompt, and lets you send that prompt to Cursor, Claude, or view it in the terminal. Prerequisites Before running build, you need: - A growth-plan.md file generated by the plan command. See Plan (/resources/docs/skene/guides/plan) for details. - An API key configured for a cloud LLM provider (OpenAI, Gemini, Anthropic), or a local LLM server running (Ollama). The LLM is used to generate the implementation prompt and a growth loop definition. See configuration (/resources/docs/skene/guides/configuration) for setup instructions. Basic usage Build a prompt using auto-detected plan and configured LLM settings: [code:bash] uvx skene build The command looks for growth-plan.md in ./skene-context/ first, then falls back to the current directory. Override LLM settings: [code:bash] uvx skene build --api-key "your-key" --provider gemini Specify a custom plan file: [code:bash] uvx skene build --plan ./my-plan.md Point to a context directory: [code:bash] uvx skene build --context ./my-context Flag reference | Flag | Short | Description | |------|-------|-------------| | --plan PATH | | Path to growth plan markdown file | | --context PATH | -c | Directory containing growth-plan.md. Auto-detected from ./skene-context/ if not specified. | | --api-key TEXT | | API key for LLM provider (or SKENE_API_KEY env var) | | --provider TEXT | -p | LLM provider: openai, gemini, anthropic/claude, lmstudio, ollama, generic | | --model TEXT | -m | Model name (uses provider default if not provided) | | --base-url TEXT | | Base URL for OpenAI-compatible API endpoint. Required when provider is generic. Also set via SKENE_BASE_URL env var or config. | | --debug | | Log all LLM input/output to .skene/debug/ | | --no-fallback | | Disable model fallback on rate limits. Retries the same model with exponential backoff instead of switching to a cheaper model. | | --target TEXT | -t | Skip the interactive menu and send the prompt directly. Options: cursor, claude, show, file. | | --feature TEXT | -f | Bias toward this feature name when linking the growth loop to a feature in the registry | How it works The build command follows a five-step pipeline: Step 1: Locate the growth plan The command auto-detects the plan file in this order: 1. If --context is specified, looks for growth-plan.md inside that directory 2. Checks ./skene-context/growth-plan.md 3. Checks ./growth-plan.md You can override this with --plan to specify an exact path. Step 2: Extract the Technical Execution section The command parses the growth plan Markdown and extracts the **Technical Execution** section. 
This section is generated by the plan command's Council of Growth Engineers and contains: - **The Next Build** -- What specific activation loop or feature to build - **Confidence Score** -- A 0%-100% rating of the hypothesis - **Exact Logic** -- The specific flow changes or implementation logic - **Exact Data Triggers** -- Events that signal successful activation - **Sequence** -- The Now / Next / Later roadmap If the Technical Execution section cannot be found, the command exits with an error and prompts you to generate a proper plan first. Step 3: Generate an intelligent prompt with LLM The extracted Technical Execution context is sent to your configured LLM with a meta-prompt. The LLM generates a focused, actionable implementation prompt that: - States the engineering work based on the Technical Execution context - Includes all relevant technical details (logic, triggers, sequence) - References the growth plan file for additional context - Asks for step-by-step implementation with code examples If the LLM call fails, the command falls back to a static template that wraps the Technical Execution content in a basic prompt structure. Step 4: Choose a destination After generating the prompt, the command presents an interactive menu: [code] Where do you want to send this prompt? Cursor (open via deep link) Claude (open in terminal) Show full prompt Cancel Use arrow keys to navigate and Enter to select. If the questionary package is not installed, the command falls back to a numbered menu. If --target is provided, this step is skipped entirely and the command proceeds directly to the specified destination. Step 5: Generate a growth loop definition Regardless of which destination you choose, the command also generates a **growth loop definition** -- a structured JSON file that captures the implementation requirements. This runs in parallel with the destination action. Destinations Cursor Selecting **Cursor** opens the Cursor editor via a deep link (cursor://). The prompt is saved to a file first (.skene-build-prompt.md in the plan's directory), then Cursor is launched with the full prompt content and a reference to the saved file. This works on macOS, Linux, and Windows. Cursor must be installed on the system. Claude Selecting **Claude** launches the Claude CLI (claude) in your current terminal session, passing it a reference to the saved prompt file. This requires the Claude CLI (https://docs.anthropic.com/claude-code) to be installed. Show Selecting **Show** prints the full prompt to the terminal inside a formatted panel. The prompt is also saved to a file so you can copy and use it with any tool. File Selecting **File** (only available via --target file) saves the prompt to a file and exits immediately without opening any editor or printing the full prompt. This is the recommended target for scripting and CI/CD pipelines. In all cases, the prompt is saved to .skene-build-prompt.md in the plan's parent directory (or the configured output directory). Growth loop definitions Every successful build run produces a growth loop JSON file saved to ./skene-context/growth-loops/. The filename follows the pattern _.json. The loop ID is derived from the "Next Build" field in the Technical Execution section, converted to snake_case with phase prefixes removed (e.g., "Phase 1: Share Flag" becomes share_flag). 
Schema The growth loop JSON conforms to the GROWTH_LOOP_VERIFICATION_SPEC schema: [code:json] { "loop_id": "share_flag", "name": "Share Flag", "description": "Detailed description of the growth loop", "linked_feature": "Social Sharing", "linked_feature_id": "social_sharing", "growth_pillars": ["engagement", "retention"], "requirements": { "files": [ { "path": "src/components/ShareButton.tsx", "purpose": "Share button component for the main dashboard", "required": true, "checks": [ { "type": "function_exists", "pattern": "handleShare", "description": "Share handler function must exist" } ] } ], "functions": [ { "file": "src/components/ShareButton.tsx", "name": "handleShare", "required": true, "signature": "handleShare(url: string) -> void", "logic": "Takes a URL string, generates a shareable link with tracking parameters, copies it to clipboard, and triggers a share_initiated telemetry event." } ], "integrations": [ { "type": "ui_component", "description": "Share button in dashboard header", "verification": "Component renders in dashboard layout" } ], "telemetry": [ { "type": "supabase", "table": "share_events", "operation": "INSERT", "properties": ["share_type", "source_page", "user_id"], "action_name": "share_initiated" } ] }, "dependencies": [], "verification_commands": ["npm test -- --grep ShareButton"], "test_coverage": { "unit_tests": ["ShareButton renders correctly", "handleShare generates valid URL"], "integration_tests": ["Share flow completes end-to-end"], "manual_tests": ["Click share button and verify link is copied"] }, "metrics": { "telemetry_events": ["share_initiated", "share_completed"], "data_actions": ["share_initiated"], "success_criteria": ["Share rate > 5% of active users"] }, "_metadata": { "source_plan_path": "/absolute/path/to/growth-plan.md", "saved_at": "2025-01-15T10:30:00", "run_target": "supabase" } } Key sections of the schema: | Section | Description | |---------|-------------| | linked_feature / linked_feature_id | The feature this loop implements, linked to the feature registry | | growth_pillars | 0-3 of "onboarding", "engagement", "retention" | | requirements.files | Files to create or modify, with verification checks (type, pattern, description) | | requirements.functions | Functions to implement, including signature and logic description | | requirements.integrations | Integration points (CLI flags, API endpoints, UI components, external services) | | requirements.telemetry | Telemetry items with type (supabase or skene_cloud), table/operation, and properties | | dependencies | Other loop IDs this loop depends on | | verification_commands | Commands to verify the implementation | | test_coverage | Unit, integration, and manual test descriptions | | metrics | Telemetry events, data_actions (must match telemetry action_names), and success criteria | | _metadata | Build metadata: source plan path, timestamp, run target | Growth loop files accumulate over time. The plan command reads existing loops and instructs the council not to suggest duplicates, keeping successive iterations complementary. LLM configuration The build command requires LLM configuration. It loads settings from your config file (.skene.config or ~/.config/skene/config) and can be overridden with CLI flags. If neither an API key nor a provider is configured, the command exits with an error listing all configuration options. The LLM is used twice during a build: 1. To generate the implementation prompt from the Technical Execution context 2. 
To generate the growth loop definition JSON If the LLM fails during prompt generation, the command falls back to a static template. If the LLM fails during growth loop generation, the command produces a minimal loop definition with empty arrays. Debug mode The --debug flag logs all LLM input and output to .skene/debug/: [code:bash] uvx skene build --debug You can also enable debug mode permanently in your config file: [code:toml] debug = true Next steps - Push (/resources/docs/skene/guides/push) -- Push growth loops to Supabase and upstream - Status (/resources/docs/skene/guides/status) -- Check whether growth loop requirements have been implemented in your codebase - Chat (/resources/docs/skene/guides/chat) -- Use the interactive terminal chat for ad-hoc growth analysis - Configuration (/resources/docs/skene/guides/configuration) -- Set up persistent config so you do not need to pass flags every time - CLI Reference (/resources/docs/skene/reference/cli) -- Full reference for all commands and flags --- # Docs — /resources/docs/skene/guides/chat Source: https://www.skene.ai/resources/docs/skene/guides/chat Chat Interactive terminal chat that lets you converse with an LLM about your codebase while it invokes skene tools to gather information. Prerequisites - An API key configured (see Configuration (/resources/docs/skene/guides/configuration)) or a local LLM running - A codebase to analyze Basic usage [code:bash] # Chat about the current directory uvx skene chat Chat about a specific codebase uvx skene chat /path/to/project Using the shorthand (defaults to chat) uvx skene The skene entry point defaults to the chat command, providing a convenient shorthand for interactive sessions. Flags | Flag | Short | Description | Default | |------|-------|-------------|---------| | --api-key | | API key for LLM provider | SKENE_API_KEY env var | | --provider | -p | LLM provider | Config or openai | | --model | -m | LLM model name | Provider default | | --base-url | | Base URL for OpenAI-compatible API endpoint. Required when provider is generic. | SKENE_BASE_URL env var | | --max-steps | | Maximum tool calls per user request | 4 | | --tool-output-limit | | Max tool output characters kept in context | 4000 | | --debug | | Log all LLM input/output to .skene/debug/ | Off | How it works The chat command starts an interactive terminal session where: 1. You type a question or request about your codebase 2. The LLM decides which skene tools to call (analyze, search, read files, etc.) 3. Tool results are fed back to the LLM within the context window 4. The LLM synthesizes a response based on the tool outputs The --max-steps flag controls how many tool calls the LLM can make per request. Increase this for complex queries that require multiple analysis passes. The --tool-output-limit flag controls how much of each tool's output is kept in context to avoid exceeding token limits. Tips for effective use - **Be specific** — "What growth features does this codebase have?" 
works better than "Tell me about this code" - **Increase max-steps for deep analysis** — Use --max-steps 8 when you want the LLM to do thorough multi-step analysis - **Use debug mode to understand behavior** — --debug logs all LLM interactions so you can see what tools are being called Next steps - Analyze command (/resources/docs/skene/guides/analyze) — Run a full codebase analysis - LLM providers (/resources/docs/skene/guides/llm-providers) — Configure different providers for chat - Configuration (/resources/docs/skene/guides/configuration) — Set defaults so you don't need flags every time --- # Docs — /resources/docs/skene/guides/configuration Source: https://www.skene.ai/resources/docs/skene/guides/configuration Configuration How to configure skene using config files, environment variables, and CLI flags. Configuration priority Settings are loaded in this order (later overrides earlier): [code] 1. User config ~/.config/skene/config (lowest priority) 2. Project config ./.skene.config 3. Env variables SKENE_API_KEY, SKENE_PROVIDER, etc. 4. CLI flags --api-key, --provider, etc. (highest priority) Config file locations | Location | Purpose | |----------|---------| | ./.skene.config | Project-level config (per-project settings) | | ~/.config/skene/config | User-level config (personal defaults) | Both files use TOML format. The user-level path respects XDG_CONFIG_HOME if set. Creating a config file [code:bash] # Create .skene.config in the current directory uvx skene config --init This creates a sample config file with restrictive permissions (0600 on Unix). Interactive editing Running config without flags opens interactive editing: [code:bash] uvx skene config This prompts you for: 1. **LLM provider** — numbered list: openai, gemini, anthropic, lmstudio, ollama, generic 2. **Model** — numbered list of provider-specific models, or enter a custom name 3. **Base URL** — only if generic provider is selected 4. **API key** — password input (masked), with option to keep existing value Viewing current config [code:bash] uvx skene config --show Displays all current configuration values and their sources. 
Config options | Option | Type | Default | Description | |--------|------|---------|-------------| | api_key | string | — | API key for LLM provider | | provider | string | "openai" | LLM provider name | | model | string | Per provider | LLM model name | | base_url | string | — | Base URL for OpenAI-compatible endpoints | | output_dir | string | "./skene-context" | Default output directory | | verbose | boolean | false | Enable verbose output | | debug | boolean | false | Enable debug logging | | exclude_folders | list | [] | Folder names to exclude from analysis | | upstream | string | — | Upstream workspace URL for push command | Default models by provider | Provider | Default model | |----------|--------------| | openai | gpt-4o | | gemini | gemini-3-flash-preview | | anthropic | claude-sonnet-4-5 | | ollama | llama3.3 | | generic | custom-model | Sample config file [code:toml] # .skene.config API key (can also use SKENE_API_KEY env var) api_key = "your-api-key" LLM provider: openai, gemini, anthropic, claude, lmstudio, ollama, generic provider = "openai" Model (defaults per provider if not set) model = "gpt-4o" Base URL for OpenAI-compatible endpoints (required for generic provider) base_url = "https://your-api.com/v1" Default output directory output_dir = "./skene-context" Enable verbose output verbose = false Enable debug logging (logs LLM I/O to .skene/debug/) debug = false Folders to exclude from analysis Matches by: exact name, substring in folder names, path patterns exclude_folders = ["tests", "vendor"] Environment variables | Variable | Description | Example | |----------|-------------|---------| | SKENE_API_KEY | API key for LLM provider | sk-... | | SKENE_PROVIDER | Provider name | gemini | | SKENE_BASE_URL | Base URL for generic provider | http://localhost:8000/v1 | | SKENE_DEBUG | Enable debug mode | true | | SKENE_UPSTREAM_API_KEY | API key for upstream authentication | sk-upstream-... | | LMSTUDIO_BASE_URL | LM Studio server URL | http://localhost:1234/v1 | | OLLAMA_BASE_URL | Ollama server URL | http://localhost:11434/v1 | Upstream credentials When using skene push to deploy to Skene Cloud, upstream URL, workspace slug, and API key are stored in .skene.config (with 0600 permissions). These fields are managed by skene login and skene logout. See the login guide (/resources/docs/skene/guides/login) for details. Excluding folders Custom exclusions from both the config file and --exclude CLI flags are merged with the built-in defaults. Default exclusions The following directories are always excluded: node_modules, .git, __pycache__, .venv, venv, dist, build, .next, .nuxt, coverage, .cache, .idea, .vscode, .svn, .hg, .pytest_cache. How matching works Exclusion matches in three ways: 1. **Exact name** — "tests" matches a folder named exactly tests 2. **Substring** — "test" matches tests, test_utils, integration_tests 3. **Path pattern** — "tests/unit" matches any path containing that pattern Examples [code:bash] # CLI flags (merged with config file exclusions) uvx skene analyze . --exclude tests --exclude vendor Short form uvx skene analyze . 
-e planner -e migrations -e docs [code:toml] # In .skene.config exclude_folders = ["tests", "vendor", "migrations", "docs"] Next steps - LLM providers (/resources/docs/skene/guides/llm-providers) — Detailed setup for each provider - CLI reference (/resources/docs/skene/reference/cli) — All commands and flags --- # Docs — /resources/docs/skene/guides/features Source: https://www.skene.ai/resources/docs/skene/guides/features Features Command The features command manages the growth feature registry -- a persistent record of all growth features detected across analysis runs. Prerequisites - A feature-registry.json file generated by the analyze command. Run skene analyze . first. Basic usage Export the feature registry as JSON (to stdout): [code:bash] uvx skene features export Export as Markdown to a file: [code:bash] uvx skene features export --format markdown -o features.md Export as CSV: [code:bash] uvx skene features export --format csv -o features.csv Flag reference | Flag | Short | Description | |------|-------|-------------| | PATH | | Project root (default: .) | | --context PATH | -c | Path to skene-context directory (auto-detected if omitted) | | --format TEXT | -f | Output format: json, csv, markdown (default: json) | | --output PATH | -o | Output file path. Prints to stdout if omitted. | The feature registry The feature registry (feature-registry.json) is automatically maintained by the analyze command. It provides: - **Persistent tracking** -- features are tracked across multiple analysis runs - **Merge-update semantics** -- new features are added, existing features are updated, missing features are archived - **Growth loop mapping** -- features are linked to growth loop definitions via loop_ids - **Growth pillars** -- features are annotated with 0-3 pillars: onboarding, engagement, retention Registry structure [code:json] { "version": "1.0", "updated_at": "2025-01-15T10:30:00", "features": [ { "feature_id": "team_invitations", "feature_name": "Team Invitations", "status": "active", "first_seen_at": "2025-01-10T08:00:00", "last_seen_at": "2025-01-15T10:30:00", "loop_ids": ["invite_flow"], "growth_pillars": ["onboarding", "engagement"] } ], "growth_loops": [ { "loop_id": "invite_flow", "name": "Invite Flow", "linked_feature_id": "team_invitations" } ] } Feature statuses | Status | Meaning | |--------|---------| | active | Feature was detected in the most recent analysis run | | archived | Feature was previously detected but not found in the latest run | Export formats JSON Full registry data including all metadata, timestamps, and loop mappings. CSV Tabular format suitable for spreadsheets and data tools. Includes feature ID, name, status, pillars, and linked loop IDs. Markdown Human-readable format suitable for documentation, wikis, or README files. Features-only analysis To update the feature registry without running a full analysis (skipping opportunities and revenue leakage): [code:bash] uvx skene analyze . --features This is faster than a full analysis and useful when you only need to refresh the feature registry. 
Next steps - Analyze (/resources/docs/skene/guides/analyze) -- Full codebase analysis that populates the feature registry - Build (/resources/docs/skene/guides/build) -- Generate growth loops linked to features - CLI Reference (/resources/docs/skene/reference/cli) -- Full reference for all commands and flags --- # Docs — /resources/docs/skene/guides/llm-providers Source: https://www.skene.ai/resources/docs/skene/guides/llm-providers LLM Providers How to configure skene with different LLM providers, including cloud APIs and local models. Provider comparison | Provider | Provider flag | Default model | API key required | Notes | |----------|--------------|---------------|-----------------|-------| | OpenAI | openai | gpt-4o | Yes | Default provider | | Gemini | gemini | gemini-3-flash-preview | Yes | Uses v1beta API | | Anthropic | anthropic or claude | claude-sonnet-4-5 | Yes | Both aliases work | | LM Studio | lmstudio | custom-model | No | Local, requires running server | | Ollama | ollama | llama3.3 | No | Local, requires running server | | Generic | generic | custom-model | Depends | Any OpenAI-compatible endpoint | Setting the provider There are three ways to configure your provider, model, and API key: [code:bash] # 1. CLI flags (highest priority) uvx skene analyze . --provider gemini --model gemini-3-flash-preview --api-key "your-key" 2. Environment variables export SKENE_API_KEY="your-key" export SKENE_PROVIDER="gemini" 3. Config file (.skene.config) uvx skene config # Interactive setup See Configuration (/resources/docs/skene/guides/configuration) for the full priority order. OpenAI The default provider. Get an API key at platform.openai.com/api-keys (https://platform.openai.com/api-keys). Any OpenAI model can be used via --model. The default is gpt-4o. [code:bash] uvx skene analyze . --provider openai --api-key "sk-..." gpt-4o is the default, but you can specify any OpenAI model uvx skene analyze . --model gpt-4o-mini --api-key "sk-..." Gemini Google's Gemini models via the v1beta API. Get an API key at aistudio.google.com/apikey (https://aistudio.google.com/apikey). Any Gemini model can be used via --model. The default is gemini-3-flash-preview. [code:bash] uvx skene analyze . --provider gemini --api-key "your-gemini-key" Use a specific model uvx skene analyze . --provider gemini --model gemini-2.5-pro --api-key "your-gemini-key" **Note**: The v1beta API requires the -preview suffix on Gemini 3.x models. Anthropic / Claude Anthropic's Claude models. Get an API key at console.anthropic.com (https://console.anthropic.com/). Both anthropic and claude work as provider names. Any Claude model can be used via --model. The default is claude-sonnet-4-5. [code:bash] uvx skene analyze . --provider anthropic --api-key "sk-ant-..." Or use the "claude" alias uvx skene analyze . --provider claude --api-key "sk-ant-..." Use a specific model uvx skene analyze . --provider claude --model claude-haiku-4-5 --api-key "sk-ant-..." LM Studio Run models locally with LM Studio (https://lmstudio.ai/). No API key required. Use --model to specify whichever model you have loaded in LM Studio. If omitted, skene sends custom-model as the model name (LM Studio typically ignores this and uses whichever model is currently loaded). [code:bash] # Make sure LM Studio is running with a model loaded uvx skene analyze . --provider lmstudio Specify the model name if needed uvx skene analyze . 
--provider lmstudio --model "your-loaded-model" **Default server URL**: http://localhost:1234/v1 To use a custom port, set the LMSTUDIO_BASE_URL environment variable: [code:bash] export LMSTUDIO_BASE_URL="http://localhost:8080/v1" The provider also accepts lm-studio and lm_studio as aliases. See Troubleshooting (/resources/docs/skene/troubleshooting) for common LM Studio issues. Ollama Run models locally with Ollama (https://ollama.com/). No API key required. Use --model to specify whichever model you have pulled in Ollama. The default is llama3.3. [code:bash] # Pull a model first ollama pull llama3.3 Make sure Ollama is running ollama serve Analyze uvx skene analyze . --provider ollama Specify a model uvx skene analyze . --provider ollama --model mistral **Default server URL**: http://localhost:11434/v1 To use a custom port, set the OLLAMA_BASE_URL environment variable: [code:bash] export OLLAMA_BASE_URL="http://localhost:8080/v1" See Troubleshooting (/resources/docs/skene/troubleshooting) for common Ollama issues. Generic (OpenAI-compatible) Connect to any OpenAI-compatible API endpoint. Requires --base-url or the SKENE_BASE_URL environment variable. [code:bash] # With API key uvx skene analyze . --provider generic --base-url "https://your-api.com/v1" --api-key "your-key" --model "your-model" Local endpoint without API key uvx skene analyze . --provider generic --base-url "http://localhost:8000/v1" --model "local-model" The provider also accepts openai-compatible and openai_compatible as aliases. Rate limiting & fallback When an LLM provider returns a rate limit error, skene automatically falls back to a cheaper model to keep the workflow moving. This is convenient for interactive use but can corrupt results during benchmarking or when you need guaranteed output from a specific model. Disabling fallback Pass --no-fallback to disable model switching. Instead of falling back, the CLI retries the **same** model with exponential backoff and raises an error if all retries are exhausted: [code:bash] uvx skene analyze . --provider gemini --model gemini-3-flash-preview --no-fallback uvx skene plan --no-fallback uvx skene build --no-fallback This flag is available on the analyze, plan, and build commands. Next steps - Configuration (/resources/docs/skene/guides/configuration) — Save provider settings to a config file - Troubleshooting (/resources/docs/skene/troubleshooting) — Fix common provider issues --- # Docs — /resources/docs/skene/guides/login Source: https://www.skene.ai/resources/docs/skene/guides/login Login Command The login and logout commands manage authentication with Skene Cloud upstream, which is required for pushing growth loops and telemetry via the push command. Prerequisites - A Skene Cloud workspace URL (e.g. https://skene.ai/workspace/my-app) - An API token for your workspace. Get your workspace api-key here: https://www.skene.ai/workspace/apikeys Basic usage Log in to upstream: [code:bash] uvx skene login --upstream https://skene.ai/workspace/my-app The command prompts you for a token, validates it against the upstream API, and saves the credentials. Check login status: [code:bash] uvx skene login --status Log out: [code:bash] uvx skene logout Flag reference login | Flag | Short | Description | |------|-------|-------------| | --upstream TEXT | -u | Upstream workspace URL | | --status | -s | Show current login status for this project | logout No options. Removes saved credentials for the current project. 
How credentials are stored Login saves upstream URL, workspace slug, and API key to .skene.config in the project directory (restrictive permissions 0600). This file is gitignored by default. Logout removes those fields from the same file, preserving other settings. Token resolution When commands need an upstream token, it is resolved in this order: 1. SKENE_UPSTREAM_API_KEY environment variable 2. upstream_api_key field in .skene.config Next steps - Push (/resources/docs/skene/guides/push) -- Push growth loops and telemetry to upstream - Configuration (/resources/docs/skene/guides/configuration) -- Config files, env vars, and priority --- # Docs — /resources/docs/skene/guides/plan Source: https://www.skene.ai/resources/docs/skene/guides/plan Plan Command The plan command generates a strategic growth plan by feeding your manifest and template data to a "Council of Growth Engineers" -- an LLM system prompt that operates as an elite advisory board focused on first-time user activation. Prerequisites Before running plan, you need: - An API key configured for a cloud LLM provider (OpenAI, Gemini, Anthropic), or a local LLM server running (LM Studio, Ollama). See configuration (/resources/docs/skene/guides/configuration) for setup instructions. - Optionally, growth-manifest.json and growth-template.json from a previous analyze run. The plan command works without these files but produces better results when they are available. Basic usage Generate a growth plan using auto-detected context files: [code:bash] uvx skene plan The command looks for growth-manifest.json and growth-template.json in ./skene-context/ (default output from analyze), then falls back to the current directory. Neither file is required -- the command runs with whatever context it finds. Specify context files explicitly: [code:bash] uvx skene plan --manifest ./skene-context/growth-manifest.json --template ./skene-context/growth-template.json Point to a directory containing both files: [code:bash] uvx skene plan --context ./my-context Generate an activation-focused plan: [code:bash] uvx skene plan --activation Flag reference **Note:** The --activation flag was previously called --onboarding in earlier versions. | Flag | Short | Description | |------|-------|-------------| | --manifest PATH | | Path to growth-manifest.json | | --template PATH | | Path to growth-template.json | | --context PATH | -c | Directory containing manifest and template. Auto-detected from ./skene-context/ if not specified. | | --output PATH | -o | Output path for growth plan markdown. Default: ./skene-context/growth-plan.md | | --api-key TEXT | | API key for LLM provider (or SKENE_API_KEY env var) | | --provider TEXT | -p | LLM provider: openai, gemini, anthropic/claude, lmstudio, ollama, generic | | --model TEXT | -m | Model name (e.g., gemini-3-flash-preview, claude-sonnet-4-5) | | --base-url TEXT | | Base URL for OpenAI-compatible API endpoint. Required when provider is generic. Also set via SKENE_BASE_URL env var or config. | | --verbose | -v | Enable verbose output | | --activation | | Generate activation-focused plan using Senior Activation Engineer perspective | | --prompt TEXT | | Additional user prompt to influence the plan generation | | --debug | | Log all LLM input/output to .skene/debug/ | | --no-fallback | | Disable model fallback on rate limits. Retries the same model with exponential backoff instead of switching to a cheaper model. 
| How it works: the Council of Growth Engineers The plan command uses a specialized system prompt called the **Council of Growth Engineers**. This is not a generic "give me a plan" prompt. The LLM is instructed to role-play as a council operating at the intersection of product, data, and psychology, drawing on decision-making frameworks from elite growth teams at companies like Meta, Airbnb, and Stripe. The council follows strict rules: - **No beginner explanations.** Assumes 99th-percentile competence. - **No generic growth hacks.** If the advice appears on a "Top 10" list, it is discarded. - **No hedging.** The council picks the winning path and kills weak strategies immediately. - **Zero fluff.** Every word must increase signal-to-noise ratio. - **Focus on first actions.** The plan targets first-time user activation, not long-term retention. - **No demos or hardcoded data.** Solutions must deliver real configuration paths or incremental real value. The council generates a structured memo covering eight sections: 1. **Executive Summary** -- High-level overview focused on first-time activation 2. **The CEO's Next Action** -- The single most impactful move to execute in the next 24 hours 3. **Strip to the Core** -- Reframes the problem as a first-action activation challenge 4. **The Playbook** -- Hidden mechanics used by elite growth teams 5. **Engineer the Asymmetric Leverage** -- The one lever that creates 10x activation for 1x input 6. **Apply Power Dynamics** -- Strategy based on controlling onboarding, first value, activation friction, and action clarity 7. **Technical Execution** -- Detailed build plan with confidence scores, exact logic, data triggers, and sequencing 8. **The Memo** -- The complete engineering memo, direct and high-signal The Technical Execution section is particularly important because it feeds directly into the build command. Activation mode The --activation flag switches the system prompt from the Council of Growth Engineers to a **Senior Activation Engineer** perspective. This mode focuses specifically on activation optimization with a different philosophy: [code:bash] uvx skene plan --activation The activation engineer operates under the principle of **progressive revelation** -- treating onboarding not as a one-time event but as a continuous evolution of state. Key concepts: - **The 60-Second Rule.** The first minute determines lifetime value. If the user has not felt the impact of value within 60 seconds, the opportunity is lost. - **Contextual Configuration.** Configuration is friction. Collect information only at the moment of action. - **Data-Driven Correction.** Onboarding flows drift when the product evolves but the flow remains static. The activation memo follows a different structure: 1. **Strip to the Momentum Core** -- Distinguish between "tour" (weak) and "pathway to power" (strong) 2. **The Playbook** -- Hidden mechanics from elite onboarding at Stripe, Linear, Vercel 3. **Engineer the Asymmetric Move** -- The single lever that makes the rest of the product inevitable 4. **Apply Power Dynamics** -- Control of the clock, state, configuration, and signals 5. **Technical Execution** -- The onboarding primitive to deploy, with confidence score and exact logic 6. **The "Generic" Trap** -- Why tooltip tours lead to completion without adoption 7. **Your Next Action** -- The most impactful technical move for the next 24 hours 8. **The Memo** -- The engineering memo Context files The plan command auto-detects context files in this order: 1. 
If --context is specified, looks inside that directory first 2. Checks ./skene-context/ (default output directory from analyze) 3. Checks the current directory For the manifest: - <context>/growth-manifest.json - ./skene-context/growth-manifest.json - ./growth-manifest.json For the template: - <context>/growth-template.json - ./skene-context/growth-template.json - ./growth-template.json The command also loads any existing **growth loop definitions** from <context>/growth-loops/. When previous growth loops are found, the council is instructed not to suggest duplicate features and to focus on complementary opportunities instead. Output format The plan is saved as a Markdown file (default: ./skene-context/growth-plan.md). If the -o path points to a directory or has no file extension, the tool appends growth-plan.md automatically. The output includes: - The full council memo in Markdown format - An **Implementation Todo List** displayed in the terminal after generation, showing prioritized tasks extracted from the plan After generation, the terminal displays the memo content and a summary todo list. The plan file is what the build command reads to generate implementation prompts. What happens without an API key If no API key is configured and you are not using a local provider (ollama, lmstudio), the command falls back to a **sample report** preview. To run the full plan generation, provide an API key via any of these methods: 1. --api-key flag 2. SKENE_API_KEY environment variable 3. api_key field in .skene.config or ~/.config/skene/config Debug mode The --debug flag logs all LLM input and output to .skene/debug/: [code:bash] uvx skene plan --debug You can also enable debug mode permanently in your config file: [code:toml] debug = true Next steps - Build (/resources/docs/skene/guides/build) -- Turn your growth plan into an implementation prompt and send it to Cursor or Claude - Configuration (/resources/docs/skene/guides/configuration) -- Set up persistent config so you do not need to pass flags every time - LLM Providers (/resources/docs/skene/guides/llm-providers) -- Detailed setup for each supported provider --- # Docs — /resources/docs/skene/guides/push Source: https://www.skene.ai/resources/docs/skene/guides/push Push Command The push command builds Supabase migrations from growth loop telemetry definitions and optionally pushes the artifacts to Skene Cloud upstream. Prerequisites Before running push, you need: - Growth loop definitions with Supabase telemetry in skene-context/growth-loops/. These are generated by the build command. See Build (/resources/docs/skene/guides/build) for details. - For upstream push: authentication via skene login. See Login (/resources/docs/skene/guides/login). Basic usage Generate a Supabase migration from all growth loops with telemetry: [code:bash] uvx skene push Push a specific loop by ID: [code:bash] uvx skene push --loop my_activation_loop Push to upstream (Skene Cloud): [code:bash] uvx skene push --upstream https://skene.ai/workspace/my-app Flag reference | Flag | Short | Description | |------|-------|-------------| | PATH | | Project root directory (default: .) | | --context PATH | -c | Path to skene-context directory (auto-detected if omitted) | | --loop TEXT | -l | Push only this loop by loop_id. If omitted, pushes all loops with Supabase telemetry. | | --upstream TEXT | -u | Upstream workspace URL (e.g. https://skene.ai/workspace/my-app). Resolved from .skene.config or this flag.
| | --push-only | | Re-push current output without regenerating migrations | How it works Step 1: Load growth loops The command loads all growth loop JSON files from skene-context/growth-loops/ and filters for loops that have Supabase telemetry (telemetry items with type: "supabase"). Step 2: Generate Supabase migration For each loop with Supabase telemetry, the command generates SQL trigger functions that: - Create a trigger on the specified table for the specified operation (INSERT, UPDATE, or DELETE) - INSERT a row into skene.event_log with the captured properties - Use idempotent DDL (DROP TRIGGER IF EXISTS before CREATE) The migration is written to supabase/migrations/_skene_telemetry.sql. Step 3: Push artifact snapshot to upstream (optional) When an upstream URL is configured (via --upstream or .skene.config), the command sends an artifact payload to POST https://www.skene.ai/api/v1/push. The upstream endpoint accepts only these keys: - engine_yaml - schema_yaml - state_machine_yaml - feature_registry (JSON object or JSON string) - growth_manifest (JSON object or JSON string) - growth_template (JSON object or JSON string) - growth_plan (Markdown string) - product_docs (Markdown string) Extra keys are ignored and empty strings are treated as omitted. This endpoint stores artifact snapshots in skene_deploys for the workspace. It does not apply SQL migrations to the linked Supabase project and does not run the full engine sync pipeline. Base schema Right now, init creates the SQL files required for deploying the skene schema to Supabase without connecting to Skene Cloud at all; keeping this step local is a deliberate security choice. Before pushing telemetry migrations, you need the base schema. Run skene init to create it: [code:bash] uvx skene init This creates supabase/migrations/20260201000000_skene_schema.sql with: - skene.event_log — universal sink for allowlisted triggers - skene.failed_events — dead-letter queue for events exceeding retry limits - skene.enrichment_map — rules table for metadata enrichment Telemetry format Growth loops include telemetry definitions that describe what events to capture. The Supabase telemetry type looks like: [code:json] { "type": "supabase", "table": "documents", "operation": "INSERT", "properties": ["id", "name", "created_at"], "action_name": "document_insert" } The push command converts these into SQL trigger functions automatically.
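The docs specify the trigger behavior but not the exact SQL text. As a rough illustration of what the generated DDL looks like for the example telemetry above, here is a minimal Python sketch that renders an idempotent trigger in the documented shape (DROP TRIGGER IF EXISTS before CREATE, an INSERT into skene.event_log); the trigger/function naming and the event_log column layout are assumptions, not skene's actual output: [code:python]
# Illustrative sketch only: renders trigger DDL in the documented shape.
# Naming and event_log columns are assumptions, not skene's real output.
def render_trigger_sql(telemetry: dict) -> str:
    name = telemetry["action_name"]
    table = telemetry["table"]
    op = telemetry["operation"]
    # For INSERT/UPDATE triggers the new row is available as NEW;
    # a DELETE trigger would read OLD instead.
    props = ", ".join(f"'{p}', NEW.{p}" for p in telemetry["properties"])
    return f"""
CREATE OR REPLACE FUNCTION skene.capture_{name}() RETURNS trigger AS $$
BEGIN
  INSERT INTO skene.event_log (action_name, properties)
  VALUES ('{name}', jsonb_build_object({props}));
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS skene_{name} ON {table};
CREATE TRIGGER skene_{name}
  AFTER {op} ON {table}
  FOR EACH ROW EXECUTE FUNCTION skene.capture_{name}();
""".strip()

telemetry = {
    "type": "supabase",
    "table": "documents",
    "operation": "INSERT",
    "properties": ["id", "name", "created_at"],
    "action_name": "document_insert",
}
print(render_trigger_sql(telemetry))
The real migration aggregates one such block per loop into supabase/migrations/_skene_telemetry.sql.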
Upstream authentication To push to upstream, authenticate first: [code:bash] uvx skene login --upstream https://skene.ai/workspace/my-app The upstream URL can also be resolved from (highest priority first): 1. --upstream CLI flag 2. upstream field in .skene.config (saved by skene login) Next steps - Login (/resources/docs/skene/guides/login) -- Authenticate with Skene Cloud upstream - Build (/resources/docs/skene/guides/build) -- Generate growth loop definitions with telemetry - Status (/resources/docs/skene/guides/status) -- Verify growth loop implementation - CLI Reference (/resources/docs/skene/reference/cli) -- Full reference for all commands and flags --- # Docs — /resources/docs/skene/guides/status Source: https://www.skene.ai/resources/docs/skene/guides/status Status Command The status command validates whether growth loop requirements have been implemented in your codebase, using AST parsing to verify that required files, functions, and patterns are present. Prerequisites Before running status, you need: - A skene-context/growth-loops/ directory containing one or more growth loop JSON files. These are generated by the build command. See Build (/resources/docs/skene/guides/build) for details. - For the --find-alternatives flag: an API key configured for a cloud LLM provider (OpenAI, Gemini, Anthropic) or a local LLM server running. See configuration (/resources/docs/skene/guides/configuration) for setup instructions. Basic usage Check the implementation status of all growth loops in the current project: [code:bash] uvx skene status The command auto-detects skene-context/growth-loops/ relative to the current directory and validates every growth loop JSON file found there. Specify a different project root: [code:bash] uvx skene status ./my-project Point to a specific context directory: [code:bash] uvx skene status --context ./my-project/skene-context Use LLM-powered semantic matching to find alternative implementations for missing requirements: [code:bash] uvx skene status --find-alternatives --api-key "your-key" Flag reference | Flag | Short | Description | |------|-------|-------------| | --context PATH | -c | Path to skene-context directory. Auto-detected from <PATH>/skene-context/ or ./skene-context/ if not specified. | | --find-alternatives | | Use LLM to search for existing functions that might fulfill missing requirements. Requires an API key. | | --api-key TEXT | | API key for LLM provider (or SKENE_API_KEY env var). Required when --find-alternatives is set. | | --provider TEXT | -p | LLM provider: openai, gemini, anthropic, ollama (uses config if not provided) | | --model TEXT | -m | LLM model name (uses provider default if not provided) | How it works The status command follows a three-step pipeline: Step 1: Locate growth loop definitions The command searches for growth loop JSON files in <context>/growth-loops/. If --context is not provided, it auto-detects the context directory by checking: 1. <PATH>/skene-context/ (where PATH is the positional argument, defaulting to .) 2. ./skene-context/ If no growth-loops/ directory is found, the command exits with an error. Step 2: Validate requirements Each growth loop JSON defines requirements in two categories: **File requirements** — The command checks that each required file exists and runs verification checks against it: - contains — file contains a literal substring - contains_regex — file matches a regular expression - function_exists — a function with the given name exists (Python AST parsing) - class_exists — a class with the given name exists (Python AST parsing) - import_exists — an import matching the given name exists (Python AST parsing) **Function requirements** — The command checks that each required function exists in the specified file, using Python AST parsing. If an expected signature is provided, it also validates that the actual signature matches. Step 3: Display validation report The command outputs a summary showing: - Total loops validated and how many are complete - Per-loop breakdown with pass/fail status for every file and function requirement - Detailed failure reasons (file not found, function missing, signature mismatch) A loop is marked **COMPLETE** when all its file and function requirements pass. Otherwise it is marked **INCOMPLETE** with a table showing which checks failed.
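To make the function_exists check concrete, here is a minimal standalone sketch using Python's ast module. skene's real validator covers the other check types (contains, contains_regex, class_exists, import_exists) and signature comparison, so treat this as an illustration rather than the implementation: [code:python]
import ast
from pathlib import Path

def function_exists(file_path: str, function_name: str) -> bool:
    path = Path(file_path)
    if not path.exists():
        return False  # the real report distinguishes "File not found"
    tree = ast.parse(path.read_text())
    # Walk the syntax tree looking for a def/async def with the right name
    return any(
        isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
        and node.name == function_name
        for node in ast.walk(tree)
    )

# The check behind a requirement like "track_progress in src/onboarding/progress.py"
print(function_exists("src/onboarding/progress.py", "track_progress"))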
Finding alternative implementations When --find-alternatives is enabled, the command extracts all function definitions from the project using AST parsing, then sends missing requirements to the LLM for semantic matching. This helps discover: - Functions that already fulfill a requirement but have a different name - Existing implementations in unexpected file locations - Partial implementations that could be adapted Alternative matches are displayed below the validation table with a confidence score (only matches above 60% confidence are shown). [code:bash] uvx skene status --find-alternatives The LLM configuration is loaded from your config file and can be overridden with --api-key, --provider, and --model. Example output [code] Project root: /path/to/project Context dir: /path/to/project/skene-context Loops dir: /path/to/project/skene-context/growth-loops Validating Share Flag... File requirement met: src/components/ShareButton.tsx... Function requirement met: handleShare... Loop complete: Share Flag... ╭── Growth Loop Validation — 1/2 loops complete ──╮ ╰────────────────────────────────────────────────────╯ Share Flag (share_flag) COMPLETE (3/3 checks, 12ms) ✅ GROWTH LOOP COMPLETE: Share Flag Onboarding Flow (onboarding_flow) INCOMPLETE (1/3 checks, 8ms) ✅ src/onboarding/welcome.py Exists OK ❌ src/onboarding/progress.py Missing File not found ❌ track_progress src/onboarding/progress.py Missing 1 loop(s) have unmet requirements. Next steps - Build (/resources/docs/skene/guides/build) -- Generate growth loop definitions that the status command validates - Chat (/resources/docs/skene/guides/chat) -- Use the interactive terminal chat for ad-hoc growth analysis - Configuration (/resources/docs/skene/guides/configuration) -- Set up persistent config for LLM settings used by --find-alternatives - CLI Reference (/resources/docs/skene/reference/cli) -- Full reference for all commands and flags --- # Docs — /resources/docs/skene/integrations/mcp-server Source: https://www.skene.ai/resources/docs/skene/integrations/mcp-server MCP Server skene provides an MCP (Model Context Protocol) (https://modelcontextprotocol.io/) server that exposes codebase analysis capabilities to AI assistants like Claude Desktop and Claude Code. The server communicates via stdio and exposes 12 tools organized into tiers by speed and complexity. Tier 1 tools run in under a second with no LLM calls. Tier 2 tools perform LLM-powered analysis with automatic caching. Tier 3 tools combine cached results into final outputs. Installation Install with the mcp optional dependency: [code:bash] # With pip pip install skene[mcp] # With uv uv pip install skene[mcp] Or run directly with uvx (no installation required): [code:bash] uvx --from "skene[mcp]" skene-mcp Configuration Claude Desktop Add to your Claude Desktop config file: - **macOS**: ~/Library/Application Support/Claude/claude_desktop_config.json - **Windows**: %APPDATA%\Claude\claude_desktop_config.json **Option 1: Installed locally** [code:json] { "mcpServers": { "skene": { "command": "skene-mcp", "env": { "SKENE_API_KEY": "your-api-key" } } } } **Option 2: Via uvx (no install needed)** [code:json] { "mcpServers": { "skene": { "command": "uvx", "args": ["--from", "skene[mcp]", "skene-mcp"], "env": { "SKENE_API_KEY": "your-api-key" } } } } Claude Code Add to .mcp.json in your project root or ~/.claude/settings.json for global configuration: [code:json] { "mcpServers": { "skene": { "command": "skene-mcp", "env": { "SKENE_API_KEY": "your-api-key" } } } } The uvx variant works identically -- use "command": "uvx" with "args": ["--from", "skene[mcp]", "skene-mcp"]. Available Tools Tier 1: Quick Tools (< 1s, no LLM) These tools run instantly and require no LLM provider configuration.
| Tool | Input | Output | Description | |------|-------|--------|-------------| | get_codebase_overview | path | tree, file_counts, total_files, config_files | Directory tree, file counts by extension, detected config files | | search_codebase | path, pattern, directory (default ".") | matches, count | Search files by glob pattern (e.g., **/*.py, src/**/*.ts) | Tier 2: Analysis Tools (5-15s, uses LLM, cached) These tools call the configured LLM provider and cache results per phase. Each tool accepts path (required) and force_refresh (optional boolean, default false). | Tool | Output Key | Description | |------|------------|-------------| | analyze_tech_stack | tech_stack, cached | Framework, language, database, auth, deployment detection | | analyze_product_overview | product_overview, cached | Product name, tagline, description, value proposition from README/docs | | analyze_growth_hubs | current_growth_features, cached | Viral/growth features: invitations, sharing, referrals, payments | | analyze_features | features, cached | User-facing features extracted from source files | | analyze_industry | industry, cached | Industry classification, sub-verticals, business model tags | Tier 3: Generation Tools (5-15s) These tools combine cached analysis results into final outputs. | Tool | Input | Output | Description | |------|-------|--------|-------------| | generate_manifest | path, product_docs, force_refresh | manifest, cached | Combine analysis phases into a GrowthManifest. **Call analyze_tech_stack and analyze_growth_hubs first.** For product_docs=true, also call analyze_product_overview and analyze_features first. | | generate_growth_template | path, business_type, force_refresh | template, cached | Create PLG template with lifecycle stages, milestones, and metrics from a manifest | | write_analysis_outputs | path, product_docs | output_dir, written_files | Write growth-manifest.json, optionally product-docs.md and growth-template.json to ./skene-context/ | Utility Tools | Tool | Input | Output | Description | |------|-------|--------|-------------| | get_manifest | path | manifest or None, exists, manifest_path | Read an existing manifest from disk without re-analyzing | | clear_cache | path (optional) | cleared, message | Clear cached analysis results. Omit path to clear all entries. | Typical Workflow The tools are designed to be called in a specific sequence. Tier 2 analysis tools populate a cache, and Tier 3 generation tools read from that cache. [code] 1. get_codebase_overview -- Understand project structure 2. analyze_tech_stack -- Detect technologies (cached) 3. analyze_growth_hubs -- Find growth features (cached) 4. generate_manifest -- Combine into GrowthManifest (reads cache) 5. generate_growth_template -- Create PLG template (optional) 6. write_analysis_outputs -- Save files to ./skene-context/ For a full analysis with product documentation, add analyze_product_overview and analyze_features before generate_manifest, and pass product_docs=true to both generate_manifest and write_analysis_outputs. Cache Configuration The MCP server caches Tier 2 analysis results to avoid redundant LLM calls. The cache uses a two-layer strategy: - **Memory cache**: In-process dictionary, fastest lookup - **Disk cache**: JSON files in the cache directory, persists across server restarts Invalidation Cache entries are automatically invalidated when: - The TTL expires (default: 1 hour) - **Marker file hashes change** -- any modification to dependency/config files triggers invalidation. 
Tracked files include: package.json, requirements.txt, pyproject.toml, Cargo.toml, go.mod, Gemfile, composer.json, and their lockfiles. - **Source directory mtimes change** -- modifications to common source directories (src, lib, app, pages, components, api, server, client) trigger invalidation. You can also manually invalidate with the clear_cache tool or by passing force_refresh=true to any analysis tool. Phase-Specific Keys Each analysis phase has its own independent cache entry. Running analyze_tech_stack does not invalidate the cache for analyze_growth_hubs. The phases are: tech_stack, product_overview, current_growth_features, features, industry, manifest, growth_template. Environment Variables | Variable | Description | Default | |----------|-------------|---------| | SKENE_API_KEY | API key for the LLM provider | (required for cloud providers) | | SKENE_PROVIDER | LLM provider: openai, gemini, anthropic, lmstudio, ollama, generic | openai | | SKENE_MODEL | Model name to use | Provider default | | SKENE_CACHE_ENABLED | Enable or disable caching (true/false) | true | | SKENE_CACHE_DIR | Directory for disk cache | ~/.cache/skene-mcp | | SKENE_CACHE_TTL | Cache time-to-live in seconds | 3600 | Using Local LLMs For LM Studio or Ollama, no API key is needed. Set SKENE_PROVIDER in the env block: [code:json] { "mcpServers": { "skene": { "command": "skene-mcp", "env": { "SKENE_PROVIDER": "lmstudio", "SKENE_MODEL": "your-loaded-model" } } } } For Ollama, use "SKENE_PROVIDER": "ollama". The server connects to the default local endpoint for each provider. See LLM Providers (/resources/docs/skene/guides/llm-providers) for details on configuring each provider. Running Manually You can start the MCP server directly for testing or debugging: [code:bash] # Via the entry point skene-mcp # Via Python module python -m skene.mcp The server communicates via stdio (standard input/output) as required by the MCP protocol. It is not meant to be run interactively -- it expects JSON-RPC messages on stdin and writes responses to stdout. Next Steps - Configuration (/resources/docs/skene/guides/configuration) -- config file options and environment variable priority - LLM Providers (/resources/docs/skene/guides/llm-providers) -- setup for OpenAI, Gemini, Claude, LM Studio, Ollama - CLI Reference (/resources/docs/skene/reference/cli) -- the full CLI if you prefer command-line usage over MCP --- # Docs — /resources/docs/skene Source: https://www.skene.ai/resources/docs/skene skene A CLI toolkit for analyzing codebases through the lens of Product-Led Growth (PLG) — detecting growth features, revenue leakage, and generating actionable growth plans.
What skene does - **Analyzes your codebase** to detect tech stack, growth features, and revenue leakage patterns - **Generates a growth manifest** — structured JSON output documenting your product's growth surface area - **Maintains a feature registry** — persistent tracking of growth features across analysis runs with merge-update semantics - **Creates growth plans** — a Council of Growth Engineers produces 3-5 high-impact growth loops - **Builds implementation prompts** that you can send directly to Cursor, Claude, or display in your terminal - **Pushes growth loops upstream** — generates Supabase telemetry migrations and deploys to Skene Cloud - **Validates growth loop implementation** — AST-based checks verify that required files and functions are present in your codebase - **Provides an MCP server** exposing 12 tools for AI assistants - **Supports multiple LLM providers**: OpenAI, Gemini, Anthropic, LM Studio, Ollama, and any OpenAI-compatible endpoint Core workflow [code:bash] # 1. Create a config file uvx skene config --init # 2. Set up your LLM provider and API key interactively uvx skene config # 3. Analyze your codebase uvx skene analyze . # 4. Generate a growth plan uvx skene plan # 5. Build an implementation prompt uvx skene build # 6. Log in to Skene Cloud uvx skene login # 7. Push growth loops to Supabase + upstream uvx skene push Key concepts **Growth manifest** (growth-manifest.json) — The primary output of the analyze command. A structured JSON file containing your project's tech stack, existing growth features, growth opportunities, and revenue leakage issues. **Growth template** (growth-template.json) — A custom PLG template generated alongside the manifest, with lifecycle stages and metrics tailored to your business type. **Growth plan** (growth-plan.md) — A markdown document produced by the plan command for your next growth action. Contains 3-5 selected high-impact growth loops with implementation roadmaps, metrics, and week-by-week timelines. **Growth loops** — Individual loop definitions (JSON) generated by the build command. Each loop includes file/function requirements, integration points, telemetry specs, verification commands, and success metrics. **Feature registry** (feature-registry.json) — A persistent registry of growth features that tracks features across analysis runs. Features are marked active or archived, linked to growth loops, and annotated with growth pillars (onboarding, engagement, retention). **Skene API key** — A single key from Skene Cloud that manages all tokens required to use LLM models (plan, build, chat) and authorizes pushing growth loops upstream. One key replaces per-provider API keys for LLM usage and enables cloud push. Get your key at https://www.skene.ai/workspace/apikeys
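Because every artifact is a plain file under ./skene-context/, downstream tooling can read them directly. A minimal sketch (the file names and manifest fields come from these docs; the glue code is illustrative): [code:python]
import json
from pathlib import Path

context = Path("./skene-context")

# growth-manifest.json: primary output of analyze
manifest = json.loads((context / "growth-manifest.json").read_text())
print(manifest["project_name"], "-", manifest["tech_stack"]["language"])
print(f"{len(manifest['current_growth_features'])} growth features detected")

# growth-loops/: one JSON definition per loop, produced by build
loops = sorted((context / "growth-loops").glob("*.json"))
print(f"{len(loops)} growth loop definitions on disk")

# growth-plan.md: the council memo that build consumes
print((context / "growth-plan.md").read_text()[:300])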
Documentation Getting started - Installation (/resources/docs/skene/getting-started/installation) — Install via uvx, pip, or from source - Quickstart (/resources/docs/skene/getting-started/quickstart) — End-to-end walkthrough in 5 commands Guides Create - Analyze (/resources/docs/skene/guides/analyze) — The analyze command in depth - Plan (/resources/docs/skene/guides/plan) — Generating growth plans - Build (/resources/docs/skene/guides/build) — Building implementation prompts - Push (/resources/docs/skene/guides/push) — Pushing growth loops to Supabase and upstream Manage - Login (/resources/docs/skene/guides/login) — Authenticating with Skene Cloud upstream - Status (/resources/docs/skene/guides/status) — Checking growth loop implementation status - Features (/resources/docs/skene/guides/features) — Managing and exporting the feature registry - LLM providers (/resources/docs/skene/guides/llm-providers) — Configuring OpenAI, Gemini, Claude, local LLMs - Configuration (/resources/docs/skene/guides/configuration) — Config files, env vars, and priority Experimental - Chat (/resources/docs/skene/guides/chat) — Interactive terminal chat Integrations - MCP server (/resources/docs/skene/integrations/mcp-server) — Using skene with AI assistants Reference - CLI reference (/resources/docs/skene/reference/cli) — All commands and flags - Python API (/resources/docs/skene/reference/python-api) — CodebaseExplorer, analyzers, schemas - Manifest schema (/resources/docs/skene/reference/manifest-schema) — JSON schema for v1.0 and v2.0 manifests Help - Troubleshooting (/resources/docs/skene/troubleshooting) — LM Studio, Ollama, common errors Hosted product - Skene Cloud (/resources/docs/cloud) — Dashboard, Supabase connection, schema analysis, feature deploy, and logs (no CLI required for the full in-browser flow) --- # Docs — /resources/docs/skene/reference/cli Source: https://www.skene.ai/resources/docs/skene/reference/cli CLI Reference Complete reference for every skene command and flag. For in-depth usage of individual commands, see the guides (/resources/docs/skene/guides/analyze). This page is a lookup reference. --- Global Options | Flag | Description | |------|-------------| | --version, -V | Show version and exit | | --help | Show help message and exit | When invoked with no arguments, skene-growth prints help and exits. The skene shorthand (without the -growth suffix) defaults to the chat command instead. --- analyze Analyze a codebase and generate growth-manifest.json. Scans your codebase to detect the technology stack, current growth features, and new growth opportunities. Requires an LLM provider (or falls back to a sample preview if no API key is set). [code] skene analyze [PATH] [OPTIONS] Arguments | Argument | Default | Description | |----------|---------|-------------| | PATH | . | Path to codebase directory to analyze (must exist) | Options | Flag | Short | Default | Description | |------|-------|---------|-------------| | --output PATH | -o | ./skene-context/growth-manifest.json | Output path for the manifest file. If a directory is given, growth-manifest.json is appended automatically. | | --api-key TEXT | | $SKENE_API_KEY or config | API key for the LLM provider | | --provider TEXT | -p | config value | LLM provider: openai, gemini, anthropic (or claude), lmstudio, ollama, generic (aliases: openai-compatible, openai_compatible) | | --model TEXT | -m | provider default | LLM model name (e.g.
gpt-4o, gemini-3-flash-preview) | | --base-url TEXT | | $SKENE_BASE_URL or config | Base URL for OpenAI-compatible API endpoint. Required when provider is generic. | | --verbose | -v | false | Enable verbose output | | --product-docs | | false | Also generate product-docs.md with user-facing feature documentation | | --features | | false | Only analyze growth features and update feature-registry.json (skips opportunities and revenue leakage) | | --exclude TEXT | -e | config value | Folder names to exclude from analysis. Repeatable: --exclude tests --exclude vendor. Merged with exclude_folders from config. | | --debug | | false | Log all LLM input/output to .skene/debug/ | | --no-fallback | | false | Disable model fallback on rate limits (429). Retries the same model with exponential backoff instead of switching to a cheaper model. | Behavior notes - When no API key is provided and the provider is not local (lmstudio, ollama, generic), the command falls back to a sample preview. - Local providers (lmstudio, ollama, generic) do not require an API key. - The generic provider requires --base-url. See the analyze guide (/resources/docs/skene/guides/analyze) for detailed usage. --- plan Generate a growth plan using the Council of Growth Engineers methodology. Reads the manifest and template produced by analyze, then uses an LLM to create a prioritized growth plan with implementation tasks. [code] skene plan [OPTIONS] Options | Flag | Short | Default | Description | |------|-------|---------|-------------| | --manifest PATH | | auto-detected | Path to growth-manifest.json. Auto-detected from ./skene-context/, ./, or the context directory. | | --template PATH | | auto-detected | Path to growth-template.json. Auto-detected using the same search order. | | --context PATH | -c | auto-detected | Directory containing manifest and template files. Checked before default paths. | | --output PATH | -o | ./skene-context/growth-plan.md | Output path for the growth plan (markdown). If a directory is given, growth-plan.md is appended. | | --api-key TEXT | | $SKENE_API_KEY or config | API key for the LLM provider | | --provider TEXT | -p | config value | LLM provider: openai, gemini, anthropic/claude, lmstudio, ollama, generic | | --model TEXT | -m | provider default | LLM model name | | --base-url TEXT | | $SKENE_BASE_URL or config | Base URL for OpenAI-compatible API endpoint. Required when provider is generic. | | --verbose | -v | false | Enable verbose output | | --activation | | false | Generate an activation-focused plan using a Senior Activation Engineer perspective | | --prompt TEXT | | | Additional user prompt to influence the plan generation | | --debug | | false | Log all LLM input/output to .skene/debug/ | | --no-fallback | | false | Disable model fallback on rate limits (429). Retries the same model with exponential backoff instead of switching to a cheaper model. | Auto-detection order Both --manifest and --template are auto-detected by searching these paths in order: 1. <context>/growth-manifest.json (if --context is set) 2. ./skene-context/growth-manifest.json 3. ./growth-manifest.json Neither file is strictly required; the plan command works with whatever context is available. See the plan guide (/resources/docs/skene/guides/plan) for detailed usage. --- build Build an AI-ready implementation prompt from your growth plan, then choose where to send it.
Extracts the Technical Execution section from the growth plan, uses an LLM to generate a focused implementation prompt, and offers interactive delivery options (Cursor deep link, Claude CLI, or display). [code] skene build [OPTIONS] Options | Flag | Short | Default | Description | |------|-------|---------|-------------| | --plan PATH | | auto-detected | Path to the growth plan markdown file. Auto-detected from ./skene-context/growth-plan.md or ./growth-plan.md. | | --context PATH | -c | auto-detected | Directory containing growth-plan.md | | --api-key TEXT | | $SKENE_API_KEY or config | API key for the LLM provider | | --provider TEXT | -p | config value | LLM provider: openai, gemini, anthropic/claude, lmstudio, ollama, generic | | --model TEXT | -m | provider default | LLM model name | | --base-url TEXT | | $SKENE_BASE_URL or config | Base URL for OpenAI-compatible API endpoint. Required when provider is generic. | | --debug | | false | Log all LLM input/output to .skene/debug/ | | --no-fallback | | false | Disable model fallback on rate limits (429). Retries the same model with exponential backoff instead of switching to a cheaper model. | | --target TEXT | -t | interactive | Skip the interactive menu and send the prompt directly. Options: cursor, claude, show, file. | | --feature TEXT | -f | | Bias toward this feature name when linking the growth loop to a feature in the registry | Delivery targets After generating the prompt, an interactive menu asks where to send it: 1. **Cursor** -- opens the prompt via a Cursor deep link 2. **Claude** -- launches the Claude CLI with the prompt file 3. **Show** -- prints the full prompt to the terminal When --target is provided, the interactive menu is skipped entirely. The file target saves the prompt to disk and exits without opening any editor or printing the full content. This is the recommended mode for scripting and subprocess usage. The prompt is always saved to a file in the plan's parent directory regardless of target selection. Behavior notes - Requires a configured LLM (API key + provider). Falls back to a template-based prompt if the LLM call fails. - Also generates and saves a growth loop definition JSON alongside the prompt. - Use --target file for non-interactive pipelines (e.g. analyze && plan && build --target file). See the build guide (/resources/docs/skene/guides/build) for detailed usage. --- status Show implementation status of growth loop requirements. Loads all growth loop JSON definitions from skene-context/growth-loops/ and uses AST parsing to verify that required files, functions, and patterns are implemented. Displays a report showing which requirements are met and which are missing. [code] skene status [PATH] [OPTIONS] Arguments | Argument | Default | Description | |----------|---------|-------------| | PATH | . | Path to the project root directory (must exist) | Options | Flag | Short | Default | Description | |------|-------|---------|-------------| | --context PATH | -c | auto-detected | Path to skene-context directory. Auto-detected from /skene-context/ or ./skene-context/. | | --find-alternatives | | false | Use LLM to search for existing functions that might fulfill missing requirements | | --api-key TEXT | | $SKENE_API_KEY or config | API key for the LLM provider. Required when --find-alternatives is set. 
| | --provider TEXT | -p | config value | LLM provider: openai, gemini, anthropic, ollama | | --model TEXT | -m | provider default | LLM model name | Context auto-detection When --context is not specified, the command checks these paths in order: 1. <PATH>/skene-context/ (where PATH is the positional argument) 2. ./skene-context/ The directory must contain a growth-loops/ subdirectory with at least one JSON file. Behavior notes - Validates every growth loop JSON file found in <context>/growth-loops/. - Uses Python AST parsing to verify function and class definitions, import statements, and content patterns. - With --find-alternatives, extracts all functions from the codebase and uses the LLM to find semantic matches for missing requirements (confidence threshold: 60%). - Does not require an API key unless --find-alternatives is enabled. See the status guide (/resources/docs/skene/guides/status) for detailed usage. --- chat Interactive terminal chat with access to skene analysis tools. [code] skene chat [PATH] [OPTIONS] Arguments | Argument | Default | Description | |----------|---------|-------------| | PATH | . | Path to codebase directory | Options | Flag | Short | Default | Description | |------|-------|---------|-------------| | --api-key TEXT | | $SKENE_API_KEY or config | API key for the LLM provider | | --provider TEXT | -p | config value | LLM provider: openai, gemini, anthropic/claude, lmstudio, ollama, generic | | --model TEXT | -m | provider default | LLM model name | | --base-url TEXT | | $SKENE_BASE_URL or config | Base URL for OpenAI-compatible API endpoint. Required when provider is generic. | | --max-steps INT | | 4 | Maximum number of tool calls the LLM can make per user request | | --tool-output-limit INT | | 4000 | Maximum characters of tool output kept in conversation context | | --debug | | false | Log all LLM input/output to .skene/debug/ | Behavior notes - When using the skene shorthand (not skene-growth), running without a subcommand defaults to chat. - Requires an API key unless using a local provider. See the chat guide (/resources/docs/skene/guides/chat) for detailed usage. --- validate Validate a growth-manifest.json file against the GrowthManifest schema. [code] skene validate MANIFEST Arguments | Argument | Required | Description | |----------|----------|-------------| | MANIFEST | Yes | Path to the growth-manifest.json file to validate (must exist) | Behavior notes - Parses the file as JSON, then validates it against the Pydantic GrowthManifest model. - On success, prints a summary table showing project name, version, tech stack, and feature counts. - On failure, prints the validation error and exits with code 1. --- config Manage skene configuration files. [code] skene config [OPTIONS] Options | Flag | Default | Description | |------|---------|-------------| | --init | false | Create a sample .skene.config file in the current directory | | --show | false | Show current configuration values and exit (no interactive editing) | Default behavior (no flags) When invoked without --init or --show: 1. Displays current configuration values (same as --show) 2. Asks whether you want to edit the configuration 3. If yes, launches an interactive setup flow to select provider, model, and enter an API key Configuration load order Configuration is resolved in this order (later sources override earlier ones): 1. User config: ~/.config/skene/config 2. Project config: ./.skene.config 3. Environment variables: SKENE_API_KEY, SKENE_PROVIDER 4. CLI flags See the configuration guide (/resources/docs/skene/guides/configuration) for file format and all supported options.
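In effect this load order is a layered merge where later sources win. A rough sketch of the rule, not skene's implementation (the config file format is inferred from the [code:toml] debug = true example in the plan guide, so treat tomllib here as an assumption): [code:python]
import os
import tomllib  # Python 3.11+; TOML-like config format assumed
from pathlib import Path

def load_file(path: Path) -> dict:
    return tomllib.loads(path.read_text()) if path.exists() else {}

def resolve_config(cli_flags: dict) -> dict:
    layers = [
        load_file(Path.home() / ".config" / "skene" / "config"),  # 1. user config
        load_file(Path(".skene.config")),                         # 2. project config
        {k: v for k, v in {                                       # 3. environment
            "api_key": os.environ.get("SKENE_API_KEY"),
            "provider": os.environ.get("SKENE_PROVIDER"),
        }.items() if v is not None},
        {k: v for k, v in cli_flags.items() if v is not None},    # 4. CLI flags
    ]
    merged: dict = {}
    for layer in layers:
        merged.update(layer)  # later layers override earlier ones
    return merged

print(resolve_config({"provider": "gemini", "api_key": None}))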
--- push Build Supabase migrations from growth loop telemetry and push artifacts to upstream. Creates idempotent trigger-based migrations that INSERT into event_log for each telemetry-defined table. Optionally pushes growth loops and telemetry SQL to Skene Cloud upstream. [code] skene push [PATH] [OPTIONS] Arguments | Argument | Default | Description | |----------|---------|-------------| | PATH | . | Project root (output directory for supabase/) | Options | Flag | Short | Default | Description | |------|-------|---------|-------------| | --context PATH | -c | auto-detected | Path to skene-context directory. Auto-detected from <PATH>/skene-context/ or ./skene-context/. | | --loop TEXT | -l | | Push only this loop (by loop_id). If omitted, pushes all loops with Supabase telemetry. | | --upstream TEXT | -u | config | Upstream workspace URL (e.g. https://skene.ai/workspace/my-app). Resolved from .skene.config or this flag. | | --push-only | | false | Re-push current output without regenerating migrations | Behavior notes - Requires growth loops with Supabase telemetry (type "supabase") in skene-context/growth-loops/. - Generates a migration file at supabase/migrations/_skene_telemetry.sql. - When --upstream is provided (or resolved from .skene.config), pushes the package (growth loops + telemetry SQL) to the upstream API. - Use skene login to authenticate before pushing to upstream. See the push guide (/resources/docs/skene/guides/push) for detailed usage. --- login Log in to Skene Cloud upstream for push. [code] skene login [OPTIONS] Options | Flag | Short | Default | Description | |------|-------|---------|-------------| | --upstream TEXT | -u | | Upstream workspace URL (e.g. https://skene.ai/workspace/my-app) | | --status | -s | false | Show current login status for this project | Behavior notes - Saves upstream URL, workspace, and API key to .skene.config with restrictive permissions (0600). - Use --status to check whether you are logged in for the current project. See the login guide (/resources/docs/skene/guides/login) for detailed usage. --- logout Log out from upstream (remove saved token). [code] skene logout Behavior notes - Removes upstream credentials from .skene.config. - Does not invalidate the token server-side. --- init Create the skene base schema migration for Supabase. [code] skene init [PATH] Arguments | Argument | Default | Description | |----------|---------|-------------| | PATH | . | Project root (output directory for supabase/) | Behavior notes - Writes supabase/migrations/20260201000000_skene_schema.sql containing the base schema: event_log, failed_events, enrichment_map tables and supporting functions. - Safe to run repeatedly -- skips if the migration already exists. - Run supabase db push afterwards to apply the migration. --- features Manage the growth feature registry. features export Export the feature registry for use in external tools. [code] skene features export [PATH] [OPTIONS] Arguments | Argument | Default | Description | |----------|---------|-------------| | PATH | . | Project root (to locate skene-context) | Options | Flag | Short | Default | Description | |------|-------|---------|-------------| | --context PATH | -c | auto-detected | Path to skene-context directory | | --format TEXT | -f | json | Output format: json, csv, markdown | | --output PATH | -o | stdout | Output file path. Prints to stdout if omitted. | Behavior notes - Reads feature-registry.json from the context directory. - Requires running analyze first to populate the registry. - Use for integrating with dashboards, Linear, Notion, or documentation. See the features guide (/resources/docs/skene/guides/features) for detailed usage.
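The same export is available from Python via the feature-registry helpers listed in the Python API reference; the exact signatures are not fully documented, so the argument shapes below are assumptions: [code:python]
from pathlib import Path

# Helpers documented in the Python API reference; argument shapes assumed
from skene.feature_registry import export_registry_to_format, load_feature_registry

registry = load_feature_registry(Path("./skene-context"))
markdown = export_registry_to_format(registry, "markdown")
Path("features.md").write_text(markdown)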
--- generate (deprecated) This command is deprecated and will be removed. Use analyze --product-docs instead. [code] skene generate [OPTIONS] | Flag | Short | Description | |------|-------|-------------| | --manifest PATH | -m | Path to growth-manifest.json | | --output PATH | -o | Output directory (default: ./skene-docs) | The command prints a deprecation warning and exits with code 1. --- Environment Variables | Variable | Used by | Description | |----------|---------|-------------| | SKENE_API_KEY | analyze, plan, build, chat, status | API key for the LLM provider. Equivalent to --api-key. | | SKENE_BASE_URL | analyze, plan, build, chat | Base URL for OpenAI-compatible endpoints. Equivalent to --base-url. | | SKENE_PROVIDER | config loading | LLM provider override at the environment level. | | SKENE_UPSTREAM_API_KEY | push, login | API key for upstream authentication. | | SKENE_DEBUG | all commands | Enable debug mode (true/false). | --- Exit Codes | Code | Meaning | |------|---------| | 0 | Success | | 1 | Error (invalid input, missing API key, validation failure, or deprecated command) | --- Examples [code:bash] # Full workflow uvx skene config --init uvx skene config uvx skene analyze . uvx skene plan uvx skene build # Analyze with explicit provider settings uvx skene analyze ./my-app -p gemini -m gemini-3-flash-preview --api-key "YOUR_KEY" # Analyze with a local LLM (no API key needed) uvx skene analyze . -p ollama -m llama3 # Analyze with OpenAI-compatible endpoint uvx skene analyze . -p generic --base-url http://localhost:8080/v1 # Generate activation-focused plan uvx skene plan --activation # Validate a manifest uvx skene validate ./skene-context/growth-manifest.json # Interactive chat uvx skene chat . -p openai -m gpt-4o # Check growth loop implementation status uvx skene status # Check status with LLM-powered alternative matching uvx skene status --find-alternatives --api-key "YOUR_KEY" # Features-only analysis (updates feature registry without full analysis) uvx skene analyze . --features # Push growth loops to Supabase + upstream uvx skene push uvx skene push --upstream https://skene.ai/workspace/my-app uvx skene push --loop my_loop_id # Login/logout from upstream uvx skene login --upstream https://skene.ai/workspace/my-app uvx skene login --status uvx skene logout # Initialize Supabase base schema uvx skene init # Export feature registry uvx skene features export --format markdown -o features.md # Quick preview (no API key, just run analyze without a key) uvx skene analyze . --- # Docs — /resources/docs/skene/reference/manifest-schema Source: https://www.skene.ai/resources/docs/skene/reference/manifest-schema Manifest Schema Reference Complete JSON schema reference for growth manifests produced by skene analysis. Overview skene outputs a structured JSON file called growth-manifest.json that captures everything discovered during codebase analysis. There are two schema versions: | Version | Schema | Description | |---------|--------|-------------| | **1.0** | GrowthManifest | Standard PLG analysis output. Contains tech stack, growth features, opportunities, and revenue leakage. | | **2.0** | DocsManifest | Extended manifest for documentation generation. Inherits all v1.0 fields and adds product_overview and features.
| The analyze command produces a v1.0 manifest by default. When run with the --product-docs flag (or via the generate_manifest MCP tool with product_docs: true), it produces a v2.0 manifest instead. Both versions are defined as Pydantic models in src/skene/manifest/schema.py. v1.0 Manifest Example (GrowthManifest) [code:json] { "version": "1.0", "project_name": "my-saas-app", "description": "A SaaS application for team collaboration", "tech_stack": { "framework": "Next.js", "language": "TypeScript", "database": "PostgreSQL", "auth": "NextAuth.js", "deployment": "Vercel", "package_manager": "npm", "services": ["Stripe", "SendGrid"] }, "industry": { "primary": "Productivity", "secondary": ["B2B", "SaaS", "Enterprise"], "confidence": 0.85, "evidence": [ "README mentions 'team collaboration' as primary use case", "Target audience includes 'businesses' and 'teams'" ] }, "current_growth_features": [ { "feature_name": "Team Invitations", "file_path": "src/features/invitations/index.ts", "detected_intent": "Viral growth through team expansion", "confidence_score": 0.85, "entry_point": "/invite", "growth_potential": [ "Add referral tracking", "Implement invite rewards" ] } ], "growth_opportunities": [ { "feature_name": "Analytics Dashboard", "description": "No usage analytics for tracking team activity", "priority": "high" } ], "revenue_leakage": [ { "issue": "Free tier allows unlimited usage without conversion prompts", "file_path": "src/pricing/tiers.py", "impact": "high", "recommendation": "Add usage limits or upgrade prompts to encourage paid conversions" } ], "generated_at": "2025-01-15T10:30:00" } v2.0 Manifest Example (DocsManifest) A v2.0 manifest includes all v1.0 fields plus product_overview and features: [code:json] { "version": "2.0", "project_name": "my-saas-app", "description": "A SaaS application for team collaboration", "tech_stack": { "framework": "Next.js", "language": "TypeScript", "database": "PostgreSQL", "auth": "NextAuth.js", "deployment": "Vercel", "package_manager": "npm", "services": ["Stripe", "SendGrid"] }, "industry": { "primary": "Productivity", "secondary": ["B2B", "SaaS", "Enterprise"], "confidence": 0.85, "evidence": [ "README mentions 'team collaboration' as primary use case", "Target audience includes 'businesses' and 'teams'" ] }, "current_growth_features": [ { "feature_name": "Team Invitations", "file_path": "src/features/invitations/index.ts", "detected_intent": "Viral growth through team expansion", "confidence_score": 0.85, "entry_point": "/invite", "growth_potential": [ "Add referral tracking", "Implement invite rewards" ] } ], "growth_opportunities": [ { "feature_name": "Analytics Dashboard", "description": "No usage analytics for tracking team activity", "priority": "high" } ], "revenue_leakage": [ { "issue": "Free tier allows unlimited usage without conversion prompts", "file_path": "src/pricing/tiers.py", "impact": "high", "recommendation": "Add usage limits or upgrade prompts to encourage paid conversions" } ], "product_overview": { "tagline": "Team collaboration that scales with your organization", "value_proposition": "Simplifies cross-team communication and project tracking, reducing coordination overhead by 40%", "target_audience": "Engineering and product teams at mid-size B2B companies" }, "features": [ { "name": "Real-time Chat", "description": "Instant messaging with threading, mentions, and emoji reactions", "file_path": "src/features/chat/index.ts", "usage_example": "import { ChatProvider } from '@/features/chat'", "category": "Communication" 
}, { "name": "Project Boards", "description": "Kanban-style boards for tracking tasks and milestones", "file_path": "src/features/boards/index.ts", "usage_example": null, "category": "Project Management" } ], "generated_at": "2025-01-15T10:30:00" } Field Reference GrowthManifest (top-level, v1.0) | Field | Type | Required | Description | |-------|------|----------|-------------| | version | string | No (default: "1.0") | Manifest schema version. | | project_name | string | Yes | Name of the analyzed project. | | description | string \| null | No | Brief description of the project. | | tech_stack | TechStack | Yes | Detected technology stack. | | industry | IndustryInfo \| null | No | Inferred industry/market vertical classification. | | current_growth_features | GrowthFeature[] | No (default: []) | Identified current features with growth potential. | | growth_opportunities | GrowthOpportunity[] | No (default: []) | Growth opportunities to address. | | revenue_leakage | RevenueLeakage[] | No (default: []) | Potential revenue leakage issues. | | generated_at | datetime | No (auto-set) | When the manifest was generated. Always overwritten to current machine time. | DocsManifest (additional fields, v2.0) Inherits all GrowthManifest fields above. The version field defaults to "2.0". | Field | Type | Required | Description | |-------|------|----------|-------------| | version | string | No (default: "2.0") | Manifest schema version for docs-enabled manifests. | | product_overview | ProductOverview \| null | No | High-level product overview for documentation. | | features | Feature[] | No (default: []) | User-facing feature documentation. | TechStack | Field | Type | Required | Description | |-------|------|----------|-------------| | framework | string \| null | No | Primary framework (e.g., "Next.js", "FastAPI", "Rails"). | | language | string | Yes | Primary programming language (e.g., "Python", "TypeScript"). | | database | string \| null | No | Database technology (e.g., "PostgreSQL", "MongoDB"). | | auth | string \| null | No | Authentication method (e.g., "JWT", "OAuth", "Clerk"). | | deployment | string \| null | No | Deployment platform (e.g., "Vercel", "AWS", "Docker"). | | package_manager | string \| null | No | Package manager (e.g., "npm", "poetry", "cargo"). | | services | string[] | No (default: []) | Third-party services and integrations (e.g., "Stripe", "SendGrid", "Twilio"). | GrowthFeature | Field | Type | Required | Description | |-------|------|----------|-------------| | feature_name | string | Yes | Name of the feature or growth area. | | file_path | string | Yes | Primary file path where this feature is implemented. | | detected_intent | string | Yes | Detected purpose or intent of the feature. | | confidence_score | float | Yes | Confidence in the detection, between 0.0 and 1.0. | | entry_point | string \| null | No | Entry point for users (e.g., URL path, function name). | | growth_potential | string[] | No (default: []) | List of growth opportunities specific to this feature. | | loop_ids | string[] | No (default: []) | IDs of growth loops linked to this feature (populated by the feature registry). | | growth_pillars | string[] | No (default: []) | 0-3 growth pillars: "onboarding", "engagement", "retention". | GrowthOpportunity | Field | Type | Required | Description | |-------|------|----------|-------------| | feature_name | string | Yes | Name of the missing feature or opportunity. | | description | string | Yes | Description of what is missing and why it matters. 
| | priority | "high" \| "medium" \| "low" | Yes | Priority level for addressing this opportunity. | RevenueLeakage | Field | Type | Required | Description | |-------|------|----------|-------------| | issue | string | Yes | Description of the revenue leakage issue. | | file_path | string \| null | No | File path where this issue is detected (if applicable). | | impact | "high" \| "medium" \| "low" | Yes | Estimated impact on revenue. | | recommendation | string | Yes | Recommendation for addressing this issue. | IndustryInfo | Field | Type | Required | Description | |-------|------|----------|-------------| | primary | string \| null | No | Primary industry vertical (e.g., "DevTools", "FinTech", "E-commerce"). | | secondary | string[] | No (default: []) | Supporting tags for sub-verticals or go-to-market nuance (e.g., "B2B", "SaaS"). | | confidence | float \| null | No | Confidence score between 0.0 and 1.0 for the classification. | | evidence | string[] | No (default: []) | Short bullets citing specific repo signals that support the classification. | ProductOverview (v2.0 only) | Field | Type | Required | Description | |-------|------|----------|-------------| | tagline | string \| null | No | Short one-liner describing the product (under 15 words). | | value_proposition | string \| null | No | What problem the product solves and why it matters. | | target_audience | string \| null | No | Who the product is for (e.g., developers, businesses). | Feature (v2.0 only) | Field | Type | Required | Description | |-------|------|----------|-------------| | name | string | Yes | Human-readable feature name. | | description | string | Yes | User-facing description of what the feature does. | | file_path | string \| null | No | Primary file where this feature is implemented. | | usage_example | string \| null | No | Code snippet or usage example. | | category | string \| null | No | Feature category (e.g., "Authentication", "API", "UI"). | Validation Use the validate command to check that a manifest file conforms to the schema: [code:bash] uvx skene validate ./growth-manifest.json The command parses the JSON and validates it against the GrowthManifest Pydantic model. On success, it prints a summary table showing the project name, version, tech stack language, and counts of growth features and opportunities. On failure, it prints the validation error and exits with code 1. Note that validate uses the v1.0 GrowthManifest schema. Since DocsManifest (v2.0) inherits from GrowthManifest, a v2.0 manifest will also pass v1.0 validation -- the extra product_overview and features fields are simply ignored.
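Since the schema ships as a Pydantic v2 model (see the Python API reference), the same validation can also be run programmatically. A minimal sketch that mirrors what validate does: [code:python]
import json
from pathlib import Path

from pydantic import ValidationError
from skene import GrowthManifest

raw = json.loads(Path("./skene-context/growth-manifest.json").read_text())
try:
    manifest = GrowthManifest.model_validate(raw)
    print(f"OK: {manifest.project_name}, "
          f"{len(manifest.current_growth_features)} growth features")
except ValidationError as exc:
    print(exc)
    raise SystemExit(1)  # mirrors the CLI's exit code 1 on failure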
Set the product_docs parameter to true to generate a v2.0 DocsManifest. Notes on generated_at The generated_at field is always overwritten to the current machine time via a Pydantic model validator, regardless of what value the LLM provides during analysis. This ensures the timestamp accurately reflects when the manifest was created on your system. --- # Docs — /resources/docs/skene/reference/python-api Source: https://www.skene.ai/resources/docs/skene/reference/python-api Python API Programmatic access to skene's codebase analysis, manifest generation, and documentation tools. Quick example [code:python] from pathlib import Path from pydantic import SecretStr from skene import CodebaseExplorer, ManifestAnalyzer from skene.llm import create_llm_client async def main(): codebase = CodebaseExplorer(Path("/path/to/repo")) llm = create_llm_client( provider="openai", api_key=SecretStr("your-api-key"), model="gpt-4o", ) analyzer = ManifestAnalyzer() result = await analyzer.run( codebase=codebase, llm=llm, request="Analyze this codebase for growth opportunities", ) manifest = result.data["output"] print(manifest["tech_stack"]) print(manifest["current_growth_features"]) asyncio.run(main()) CodebaseExplorer Safe, sandboxed access to codebase files. Automatically excludes common build/cache directories. [code:python] from pathlib import Path from skene import CodebaseExplorer, DEFAULT_EXCLUDE_FOLDERS Create with default exclusions explorer = CodebaseExplorer(Path("/path/to/repo")) Create with custom exclusions (merged with defaults) explorer = CodebaseExplorer( Path("/path/to/repo"), exclude_folders=["tests", "vendor", "migrations"] ) Methods | Method | Returns | Description | |--------|---------|-------------| | await get_directory_tree(start_path, max_depth) | dict | Directory tree with file counts | | await search_files(start_path, pattern) | dict | Files matching glob pattern | | await read_file(file_path) | str | File contents | | await read_multiple_files(file_paths) | dict | Multiple file contents | | should_exclude(path) | bool | Check if a path should be excluded | Related - build_directory_tree — Standalone function for building directory trees - DEFAULT_EXCLUDE_FOLDERS — List of default excluded folder names Analyzers ManifestAnalyzer Runs a full codebase analysis and produces a growth manifest. [code:python] from skene import ManifestAnalyzer analyzer = ManifestAnalyzer() result = await analyzer.run( codebase=codebase, llm=llm, request="Analyze this codebase for growth opportunities", ) manifest = result.data["output"] TechStackAnalyzer Detects the technology stack of a codebase. [code:python] from skene import TechStackAnalyzer analyzer = TechStackAnalyzer() result = await analyzer.run(codebase=codebase, llm=llm) tech_stack = result.data["output"] GrowthFeaturesAnalyzer Identifies existing growth features in a codebase. 
[code:python] from skene import GrowthFeaturesAnalyzer analyzer = GrowthFeaturesAnalyzer() result = await analyzer.run(codebase=codebase, llm=llm) features = result.data["output"] Configuration [code:python] from skene import Config, load_config Load config from files + env vars config = load_config() Access properties config.api_key # str | None config.provider # str (default: "openai") config.model # str (auto-determined if not set) config.output_dir # str (default: "./skene-context") config.verbose # bool (default: False) config.debug # bool (default: False) config.exclude_folders # list[str] (default: []) config.base_url # str | None config.upstream # str | None (upstream workspace URL) Get/set arbitrary keys config.get("api_key", default=None) config.set("provider", "gemini") Upstream credentials [code:python] from skene.config import ( save_upstream_to_config, # Save upstream URL, workspace, API key to .skene.config remove_upstream_from_config,# Remove upstream credentials from .skene.config resolve_upstream_token, # Resolve token from env/config ) LLM Client [code:python] from pydantic import SecretStr from skene.llm import create_llm_client, LLMClient client: LLMClient = create_llm_client( provider="openai", # openai, gemini, anthropic, ollama, lmstudio, generic api_key=SecretStr("key"), model="gpt-4o", base_url=None, # Required for generic provider debug=False, # Log LLM I/O to .skene/debug/ ) Manifest schemas All schemas are Pydantic v2 models. See Manifest schema reference (/resources/docs/skene/reference/manifest-schema) for full field details. [code:python] from skene import ( GrowthManifest, # v1.0 manifest DocsManifest, # v2.0 manifest (extends GrowthManifest) TechStack, GrowthFeature, GrowthOpportunity, IndustryInfo, ProductOverview, # v2.0 only Feature, # v2.0 only ) GrowthManifest fields | Field | Type | |-------|------| | version | str ("1.0") | | project_name | str | | description | str \| None | | tech_stack | TechStack | | industry | IndustryInfo \| None | | current_growth_features | list[GrowthFeature] | | growth_opportunities | list[GrowthOpportunity] | | revenue_leakage | list[RevenueLeakage] | | generated_at | datetime | DocsManifest additional fields | Field | Type | |-------|------| | version | str ("2.0") | | product_overview | ProductOverview \| None | | features | list[Feature] | Feature registry [code:python] from skene.feature_registry import ( load_feature_registry, # Load registry from disk write_feature_registry, # Write registry to disk merge_features_into_registry, # Merge new features with existing registry merge_registry_and_enrich_manifest, # Full registry + manifest enrichment pipeline load_features_for_build, # Load active features for build command export_registry_to_format, # Export to json, csv, or markdown derive_feature_id, # Convert feature name to snake_case ID compute_loop_ids_by_feature, # Map feature_id -> list of loop_ids ) Key functions | Function | Description | |----------|-------------| | merge_features_into_registry(new_features, registry) | Merges new features: adds new, updates matched, archives missing | | merge_registry_and_enrich_manifest(manifest, context_dir) | Full pipeline: loads loops, maps to features, writes registry, enriches manifest | | load_features_for_build(context_dir) | Returns active features list for the build command | | export_registry_to_format(registry, format) | Exports to "json", "csv", or "markdown" | Growth loops [code:python] from skene.growth_loops.storage import ( load_existing_growth_loops, # Load 
all loop JSONs from growth-loops/ write_growth_loop_json, # Write a loop JSON to disk generate_loop_definition_with_llm, # Generate loop definition via LLM derive_loop_id, # Derive loop_id from name derive_loop_name, # Derive name from technical execution ) from skene.growth_loops.push import ( ensure_base_schema_migration, # Create base schema migration build_loops_to_supabase, # Build Supabase migrations from loops build_migration_sql, # Generate migration SQL write_migration, # Write migration file push_to_upstream, # Push to upstream API ) from skene.growth_loops.upstream import ( validate_token, # Validate token via upstream API build_package, # Assemble deployment package build_push_manifest, # Create push manifest with checksum push_to_upstream, # POST package to /api/v1/push ) Plan decline [code:python] from skene.planner.decline import ( decline_plan, # Archive a declined plan with executive summary only load_declined_plans, # Load recent declined plans for reference ) Documentation generation [code:python] from skene import DocsGenerator, GrowthManifest manifest = GrowthManifest.model_validate_json(open("growth-manifest.json").read()) generator = DocsGenerator() context_doc = generator.generate_context_doc(manifest) product_doc = generator.generate_product_docs(manifest) The PSEOBuilder class generates programmatic SEO content from manifests. Strategy framework The analysis pipeline is built on a composable strategy framework: [code:python] from skene.strategies import ( AnalysisStrategy, # Base strategy class AnalysisResult, # Result container with data + metadata AnalysisMetadata, # Timing, token usage, step info AnalysisContext, # Shared context between steps MultiStepStrategy, # Chains multiple steps together ) from skene.strategies.steps import ( AnalysisStep, # Base step class SelectFilesStep, # Select relevant files for analysis ReadFilesStep, # Read file contents AnalyzeStep, # Send to LLM for analysis GenerateStep, # Generate structured output ) These classes are primarily used internally by the analyzers but can be composed for custom analysis pipelines. Planner [code:python] from skene.planner import Planner from skene.planner.schema import GrowthPlan, TechnicalExecution, PlanSection The Planner class generates growth plans from manifests and templates. It is used internally by the plan CLI command. GrowthPlan schema | Field | Type | Description | |-------|------|-------------| | executive_summary | str | High-level summary focused on first-time activation | | sections | list[PlanSection] | Numbered memo sections (1-6) | | technical_execution | TechnicalExecution | Section 7: Technical Execution | | memo | str | Section 8: The closing confidential engineering memo | TechnicalExecution fields | Field | Type | Description | |-------|------|-------------| | next_build | str | What activation loop to build next | | confidence | str | Confidence level, e.g. "85%" | | exact_logic | str | Specific flow changes for first-action completion | | data_triggers | str | Events indicating first meaningful action | | stack_steps | str | Tools, scripts, or structural changes required | | sequence | str | Now / Next / Later priorities | PlanSection fields | Field | Type | Description | |-------|------|-------------| | title | str | Section heading, e.g. 
"The Next Action" | | content | str | Free-form markdown content | Helper functions - render_plan_to_markdown(plan, project_name, generated_at) — Render a GrowthPlan to the council memo markdown format - parse_plan_json(response) — Parse an LLM response (with optional code fences) into a validated GrowthPlan --- # Docs — /resources/docs/skene/troubleshooting Source: https://www.skene.ai/resources/docs/skene/troubleshooting Troubleshooting Solutions for common issues when using skene. LM Studio Context length error [code] Error code: 400 - {'error': 'The number of tokens to keep from the initial prompt is greater than the context length...'} The model's context length is too small for the analysis. To fix: 1. In LM Studio, unload the current model 2. Go to **Developer > Load** 3. Click on **Context Length: Model supports up to N tokens** 4. Set it to the maximum supported value 5. Reload to apply changes Reference: lmstudio-ai/lmstudio-bug-tracker#237 (https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/237) Connection refused Ensure: - LM Studio is running - A model is loaded and ready - The server is running on the default port (http://localhost:1234) For a custom port: [code:bash] export LMSTUDIO_BASE_URL="http://localhost:8080/v1" Ollama Connection refused Ensure: - Ollama is running (ollama serve) - A model is pulled and available (ollama list) - The server is on the default port (http://localhost:11434) Getting started with Ollama: [code:bash] # Pull a model ollama pull llama3.3 Start the server (usually runs automatically after install) ollama serve For a custom port: [code:bash] export OLLAMA_BASE_URL="http://localhost:8080/v1" API key issues "No API key" or fallback to sample report If analyze runs without an API key, it falls back to showing a sample preview. Set your key using one of: [code:bash] # CLI flag uvx skene analyze . --api-key "your-key" Environment variable export SKENE_API_KEY="your-key" Config file (interactive) uvx skene config Wrong provider for API key Make sure the API key matches the provider. An OpenAI key won't work with --provider gemini. Provider issues Unknown provider Valid provider names: - openai - gemini - anthropic or claude - lmstudio, lm-studio, or lm_studio - ollama - generic, openai-compatible, or openai_compatible Generic provider: missing base URL The generic provider requires a base URL: [code:bash] uvx skene analyze . --provider generic --base-url "http://localhost:8000/v1" --model "your-model" Or set via environment variable: [code:bash] export SKENE_BASE_URL="http://localhost:8000/v1" File not found errors Manifest not found (plan/build commands) The plan and build commands look for files in ./skene-context/ by default. Make sure you've run analyze first: [code:bash] uvx skene analyze . # Creates ./skene-context/growth-manifest.json uvx skene plan # Reads from ./skene-context/ Or specify paths explicitly: [code:bash] uvx skene plan --manifest ./path/to/manifest.json --template ./path/to/template.json uvx skene plan --context ./my-output-dir Growth plan not found (build command) [code:bash] uvx skene plan # Creates ./skene-context/growth-plan.md uvx skene build # Reads from ./skene-context/ Or specify explicitly uvx skene build --plan ./path/to/growth-plan.md Rate limit errors When a provider returns a rate limit error, skene silently falls back to a cheaper model. This keeps the workflow moving but means the output was generated by a different model than configured. If you need output from a specific model (e.g. 
during benchmarking), use --no-fallback: [code:bash] uvx skene analyze . --no-fallback With --no-fallback, the CLI retries the same model with exponential backoff. If all 3 retries are exhausted, the command raises an error instead of switching models. Push / upstream issues "No token" error If push says "No token", you need to authenticate first: [code:bash] uvx skene login --upstream https://skene.ai/workspace/my-app Or set the token via environment variable: [code:bash] export SKENE_UPSTREAM_API_KEY="your-token" "No growth loops with Supabase telemetry found" The push command requires growth loops that include telemetry items with type: "supabase". Make sure you have run build first: [code:bash] uvx skene build Growth loop files are stored in skene-context/growth-loops/. Check that at least one loop has a requirements.telemetry entry with "type": "supabase". Push authentication failed (401/403) Your token may have expired or be invalid. Log out and log in again: [code:bash] uvx skene logout uvx skene login --upstream https://skene.ai/workspace/my-app Base schema migration missing If push fails because the base schema is missing, run init first: [code:bash] uvx skene init Then apply the migration with supabase db push. Debug mode Use --debug on any command to log all LLM input and output to .skene/debug/: [code:bash] uvx skene analyze . --debug uvx skene plan --debug uvx skene chat --debug Debug mode can also be enabled via environment variable or config: [code:bash] export SKENE_DEBUG=true [code:toml] # .skene.config debug = true The debug logs show the full prompts sent to the LLM and the complete responses, which is useful for diagnosing unexpected output or provider-specific issues. Getting help - GitHub issues: github.com/SkeneTechnologies/skene/issues (https://github.com/SkeneTechnologies/skene/issues) - Documentation: www.skene.ai/resources/docs/skene (https://www.skene.ai/resources/docs/skene)