
skene-growth CLI documentation

Analyze codebases for product-led growth opportunities, generate growth plans, and build implementation prompts.

The analyze command

The analyze command scans a codebase to detect its technology stack, identify existing growth features, surface growth opportunities, and flag revenue leakage -- all powered by an LLM of your choice.

Prerequisites

Before running analyze, you need one of the following:

  • An API key configured for a cloud LLM provider (OpenAI, Gemini, Anthropic). See configuration for setup instructions.
  • A local LLM server running (LM Studio, Ollama). No API key required.

Basic usage

Analyze the current directory:

uvx skene-growth analyze .

Analyze a specific project path:

uvx skene-growth analyze ./my-project

Save the manifest to a custom location:

uvx skene-growth analyze . -o ./output/manifest.json

If no API key is configured and you are not using a local provider, the command displays a sample growth analysis preview instead of running a full LLM-powered analysis.

Flag reference

| Flag | Short | Description |
| --- | --- | --- |
| PATH | | Path to codebase (default: ., must be an existing directory) |
| --output PATH | -o | Output path for growth-manifest.json |
| --api-key TEXT | | API key for LLM provider (or SKENE_API_KEY env var) |
| --provider TEXT | -p | LLM provider: openai, gemini, anthropic/claude, lmstudio, ollama, generic (aliases: openai-compatible, openai_compatible) |
| --model TEXT | -m | Model name (e.g., gpt-4o, gemini-3-flash-preview, claude-sonnet-4-5) |
| --base-url TEXT | | Base URL for OpenAI-compatible endpoint (required for generic provider; also SKENE_BASE_URL env var) |
| --business-type TEXT | -b | Business type for growth template (e.g., design-agency, b2b-saas). LLM infers if not provided. |
| --product-docs | | Generate product-docs.md with user-facing feature documentation (creates v2.0 manifest) |
| --exclude TEXT | -e | Folder names to exclude from analysis (repeatable). Also configurable in .skene-growth.config as exclude_folders. |
| --verbose | -v | Enable verbose output |
| --debug | | Log all LLM input/output to .skene-growth/debug/ |

Output files

By default, output files are saved to ./skene-context/. You can override this with the -o flag. If the -o path points to a directory or has no file extension, the tool appends growth-manifest.json automatically.
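The resolution rule can be sketched in portable shell. This is an illustrative approximation of the behavior described above, not the tool's actual implementation; the real CLI may handle additional edge cases:

```shell
# Approximate the -o path resolution: if the final path component has no
# file extension, treat the path as a directory and append the default
# filename growth-manifest.json.
resolve_output() {
  p="$1"
  case "${p##*/}" in
    *.*) printf '%s\n' "$p" ;;                          # has an extension: use as given
    *)   printf '%s\n' "${p%/}/growth-manifest.json" ;; # directory or bare name: append default
  esac
}

resolve_output ./output/manifest.json   # -> ./output/manifest.json
resolve_output ./output/                # -> ./output/growth-manifest.json
resolve_output ./output                 # -> ./output/growth-manifest.json
```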

growth-manifest.json

The primary output. Contains the full analysis results as structured JSON:

| Field | Description |
| --- | --- |
| version | Schema version ("1.0", or "2.0" with --product-docs) |
| project_name | Name of the analyzed project |
| description | Brief project description |
| tech_stack | Detected stack: framework, language, database, auth, deployment, package_manager, services |
| industry | Inferred industry vertical with primary and secondary tags, confidence score, and evidence |
| current_growth_features | Features with growth potential, including file paths, detected intent, confidence scores, and growth suggestions |
| growth_opportunities | Missing features that could drive growth, with priority levels |
| revenue_leakage | Revenue leakage issues with impact assessment and recommendations |
| generated_at | ISO 8601 timestamp of when the manifest was generated |
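A trimmed manifest might look like the following. The top-level field names come from the table above; the values and nested shapes are an illustrative sketch, not a guaranteed schema:

```json
{
  "version": "1.0",
  "project_name": "my-project",
  "description": "Example SaaS dashboard",
  "tech_stack": { "framework": "Next.js", "language": "TypeScript" },
  "growth_opportunities": [
    { "feature": "referral program", "priority": "high" }
  ],
  "generated_at": "2025-01-01T00:00:00Z"
}
```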

growth-template.json

A custom PLG growth template generated alongside the manifest. Contains lifecycle stages tailored to your project's business type and industry. This template is used by the plan command to generate actionable growth plans.

product-docs.md (only with --product-docs)

A Markdown file containing user-facing product documentation generated from your codebase. See the Product docs mode section below.

Business type

The --business-type flag tells the LLM what kind of business your project serves, which shapes the growth template it generates. Examples:

# SaaS product
uvx skene-growth analyze . -b b2b-saas

# Design or creative agency
uvx skene-growth analyze . -b design-agency

# E-commerce platform
uvx skene-growth analyze . -b ecommerce

If you omit --business-type, the LLM infers it from your codebase. In most cases, the inference is accurate enough. Use the flag when the LLM's guess is wrong or when you want to see growth recommendations tailored to a specific business model.

The business type affects the growth-template.json output -- different business types produce different lifecycle stages, growth loops, and strategic recommendations.
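The template's exact schema is not documented on this page. As a rough illustration of the kind of content it carries (all field names and values below are hypothetical), a b2b-saas template could look like:

```json
{
  "business_type": "b2b-saas",
  "lifecycle_stages": ["acquisition", "activation", "retention", "revenue", "referral"],
  "growth_loops": ["invite teammates -> shared workspace -> more invites"]
}
```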

Product docs mode

The --product-docs flag enables an extended analysis that generates user-facing documentation from your codebase:

uvx skene-growth analyze . --product-docs

When enabled, the command:

  1. Collects product overview information (tagline, value proposition, target audience)
  2. Identifies user-facing features with descriptions, usage examples, and categories
  3. Generates product-docs.md in the output directory
  4. Produces a v2.0 manifest (extends the standard manifest with product_overview and features fields)

The v2.0 manifest is a superset of v1.0 -- all standard fields remain present.
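For example, a v2.0 manifest adds something like the following on top of the standard fields. The product_overview and features field names are stated above; the nested keys and values are illustrative:

```json
{
  "version": "2.0",
  "product_overview": {
    "tagline": "Ship dashboards in minutes",
    "value_proposition": "Build internal tools without code",
    "target_audience": "ops teams"
  },
  "features": [
    { "name": "CSV import", "description": "Upload spreadsheets as data sources", "category": "data" }
  ]
}
```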

Excluding folders

Use --exclude (or -e) to skip folders during analysis. This is useful for large repositories where you want to ignore generated code, vendored dependencies, or test fixtures.

The flag can be used multiple times:

uvx skene-growth analyze . --exclude node_modules --exclude dist --exclude .next

You can also set exclusions permanently in your config file. In .skene-growth.config:

exclude_folders = ["node_modules", "dist", ".next", "vendor", "__pycache__"]

CLI exclusions and config exclusions are merged (deduplicated). CLI flags do not override config -- they add to the list.
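The merge behavior can be sketched as follows (illustrative shell, not the tool's code):

```shell
# Merge config-file and CLI exclusion lists, dropping duplicates.
config_excludes="node_modules dist"
cli_excludes="dist .next"
merged=$(printf '%s\n' $config_excludes $cli_excludes | sort -u)
echo "$merged"   # three unique folder names, one per line; dist appears once
```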

Debug mode

The --debug flag logs all LLM input and output to .skene-growth/debug/:

uvx skene-growth analyze . --debug

This is useful for:

  • Troubleshooting unexpected analysis results
  • Inspecting the prompts sent to the LLM
  • Verifying that the LLM is receiving the right context

You can also enable debug mode permanently in your config file:

debug = true

Or via environment variable:

export SKENE_DEBUG=true

Provider-specific examples

OpenAI

uvx skene-growth analyze . --provider openai --api-key "sk-..." --model gpt-4o

Google Gemini

uvx skene-growth analyze . --provider gemini --api-key "AI..." --model gemini-3-flash-preview

Anthropic

uvx skene-growth analyze . --provider anthropic --api-key "sk-ant-..." --model claude-sonnet-4-5

Ollama (local)

# Start Ollama first, then:
uvx skene-growth analyze . --provider ollama --model llama3.3

No API key required. The default model for Ollama is llama3.3.

LM Studio (local)

# Start LM Studio server first, then:
uvx skene-growth analyze . --provider lmstudio

No API key required.

Generic / OpenAI-compatible

For any OpenAI-compatible API endpoint:

uvx skene-growth analyze . \
  --provider generic \
  --base-url "http://localhost:8080/v1" \
  --model my-model

The --base-url flag is required when using the generic or openai-compatible provider. You can also set it via the SKENE_BASE_URL environment variable.

What happens without an API key

If no API key is configured and you are not using a local provider (lmstudio, ollama, generic), the command falls back to a sample report -- a preview of the analysis output structure using representative data. This lets you see what a full analysis looks like before committing to an API key.

To run the full analysis, provide an API key via any of these methods:

  1. --api-key flag
  2. SKENE_API_KEY environment variable
  3. api_key field in .skene-growth.config or ~/.config/skene-growth/config
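A minimal .skene-growth.config combining settings mentioned on this page might look like the following (illustrative; see the configuration docs for the full list of supported fields):

```toml
api_key = "sk-..."
debug = false
exclude_folders = ["node_modules", "dist"]
```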

Next steps

  • Plan -- Generate a growth plan from your manifest using the Council of Growth Engineers
  • Configuration -- Set up persistent config so you do not need to pass flags every time