
skene CLI documentation

Analyze codebases for product-led growth opportunities, generate growth plans, and build implementation prompts.

Python API

Programmatic access to skene's codebase analysis, manifest generation, and documentation tools.

Quick example

import asyncio
from pathlib import Path
from pydantic import SecretStr
from skene import CodebaseExplorer, ManifestAnalyzer
from skene.llm import create_llm_client

async def main():
    codebase = CodebaseExplorer(Path("/path/to/repo"))
    llm = create_llm_client(
        provider="openai",
        api_key=SecretStr("your-api-key"),
        model="gpt-4o",
    )

    analyzer = ManifestAnalyzer()
    result = await analyzer.run(
        codebase=codebase,
        llm=llm,
        request="Analyze this codebase for growth opportunities",
    )

    manifest = result.data["output"]
    print(manifest["tech_stack"])
    print(manifest["current_growth_features"])

asyncio.run(main())

CodebaseExplorer

Safe, sandboxed access to codebase files. Automatically excludes common build/cache directories.

from pathlib import Path
from skene import CodebaseExplorer, DEFAULT_EXCLUDE_FOLDERS

# Create with default exclusions
explorer = CodebaseExplorer(Path("/path/to/repo"))

# Create with custom exclusions (merged with defaults)
explorer = CodebaseExplorer(
    Path("/path/to/repo"),
    exclude_folders=["tests", "vendor", "migrations"]
)

Methods

  • await get_directory_tree(start_path, max_depth) → dict — Directory tree with file counts
  • await search_files(start_path, pattern) → dict — Files matching glob pattern
  • await read_file(file_path) → str — File contents
  • await read_multiple_files(file_paths) → dict — Multiple file contents
  • should_exclude(path) → bool — Check if a path should be excluded

  • build_directory_tree — Standalone function for building directory trees
  • DEFAULT_EXCLUDE_FOLDERS — List of default excluded folder names
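
The exclusion check tests each path component against the excluded folder names. A minimal sketch of that behavior, assuming a small illustrative subset of DEFAULT_EXCLUDE_FOLDERS (the real should_exclude implementation may differ):

```python
from pathlib import Path

# Illustrative subset; the real DEFAULT_EXCLUDE_FOLDERS list is longer.
EXCLUDE_FOLDERS = {"node_modules", ".git", "__pycache__", "dist", "build"}

def should_exclude(path: Path, exclude_folders: set[str] = EXCLUDE_FOLDERS) -> bool:
    """Return True if any component of the path is an excluded folder name."""
    return any(part in exclude_folders for part in path.parts)

print(should_exclude(Path("src/app/node_modules/react")))  # True
print(should_exclude(Path("src/app/main.py")))             # False
```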

Analyzers

ManifestAnalyzer

Runs a full codebase analysis and produces a growth manifest.

from skene import ManifestAnalyzer

analyzer = ManifestAnalyzer()
result = await analyzer.run(
    codebase=codebase,
    llm=llm,
    request="Analyze this codebase for growth opportunities",
)

manifest = result.data["output"]

TechStackAnalyzer

Detects the technology stack of a codebase.

from skene import TechStackAnalyzer

analyzer = TechStackAnalyzer()
result = await analyzer.run(codebase=codebase, llm=llm)
tech_stack = result.data["output"]

GrowthFeaturesAnalyzer

Identifies existing growth features in a codebase.

from skene import GrowthFeaturesAnalyzer

analyzer = GrowthFeaturesAnalyzer()
result = await analyzer.run(codebase=codebase, llm=llm)
features = result.data["output"]

Configuration

from skene import Config, load_config

# Load config from files + env vars
config = load_config()

# Access properties
config.api_key       # str | None
config.provider      # str (default: "openai")
config.model         # str (auto-determined if not set)
config.output_dir    # str (default: "./skene-context")
config.verbose       # bool (default: False)
config.debug         # bool (default: False)
config.exclude_folders  # list[str] (default: [])
config.base_url      # str | None
config.upstream      # str | None (upstream workspace URL)

# Get/set arbitrary keys
config.get("api_key", default=None)
config.set("provider", "gemini")

Upstream credentials

from skene.config import (
    save_upstream_to_config,    # Save upstream URL, workspace, API key to .skene.config
    remove_upstream_from_config,# Remove upstream credentials from .skene.config
    resolve_upstream_token,     # Resolve token from env/config
)

LLM Client

from pydantic import SecretStr
from skene.llm import create_llm_client, LLMClient

client: LLMClient = create_llm_client(
    provider="openai",          # openai, gemini, anthropic, ollama, lmstudio, generic
    api_key=SecretStr("key"),
    model="gpt-4o",
    base_url=None,              # Required for generic provider
    debug=False,                # Log LLM I/O to .skene/debug/
)

Manifest schemas

All schemas are Pydantic v2 models. See Manifest schema reference for full field details.

from skene import (
    GrowthManifest,     # v1.0 manifest
    DocsManifest,       # v2.0 manifest (extends GrowthManifest)
    TechStack,
    GrowthFeature,
    GrowthOpportunity,
    IndustryInfo,
    ProductOverview,    # v2.0 only
    Feature,            # v2.0 only
)

GrowthManifest fields

  • version — str ("1.0")
  • project_name — str
  • description — str | None
  • tech_stack — TechStack
  • industry — IndustryInfo | None
  • current_growth_features — list[GrowthFeature]
  • growth_opportunities — list[GrowthOpportunity]
  • revenue_leakage — list[RevenueLeakage]
  • generated_at — datetime

DocsManifest additional fields

  • version — str ("2.0")
  • product_overview — ProductOverview | None
  • features — list[Feature]

Feature registry

from skene.feature_registry import (
    load_feature_registry,              # Load registry from disk
    write_feature_registry,             # Write registry to disk
    merge_features_into_registry,       # Merge new features with existing registry
    merge_registry_and_enrich_manifest, # Full registry + manifest enrichment pipeline
    load_features_for_build,            # Load active features for build command
    export_registry_to_format,          # Export to json, csv, or markdown
    derive_feature_id,                  # Convert feature name to snake_case ID
    compute_loop_ids_by_feature,        # Map feature_id -> list of loop_ids
)
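
derive_feature_id reduces a human-readable feature name to a snake_case identifier. A minimal sketch of that kind of normalization (the real implementation may handle more cases):

```python
import re

def derive_feature_id(name: str) -> str:
    """Lowercase, keep alphanumeric runs, join words with underscores."""
    words = re.findall(r"[a-z0-9]+", name.lower())
    return "_".join(words)

print(derive_feature_id("Referral Program (v2)"))  # referral_program_v2
```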

Key functions

  • merge_features_into_registry(new_features, registry) — Merges new features: adds new, updates matched, archives missing
  • merge_registry_and_enrich_manifest(manifest, context_dir) — Full pipeline: loads loops, maps to features, writes registry, enriches manifest
  • load_features_for_build(context_dir) — Returns active features list for the build command
  • export_registry_to_format(registry, format) — Exports to "json", "csv", or "markdown"
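
The merge contract (add new, update matched, archive missing) can be sketched over plain dicts keyed by feature ID. This illustrates the documented semantics, not the actual implementation or return type:

```python
def merge_features_into_registry(new_features: dict[str, dict],
                                 registry: dict[str, dict]) -> dict[str, dict]:
    """Add unseen features, update matches, mark absent ones as archived."""
    merged = {}
    for fid, entry in registry.items():
        if fid in new_features:
            merged[fid] = {**entry, **new_features[fid], "status": "active"}  # update matched
        else:
            merged[fid] = {**entry, "status": "archived"}                     # archive missing
    for fid, entry in new_features.items():
        if fid not in merged:
            merged[fid] = {**entry, "status": "active"}                       # add new
    return merged
```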

Growth loops

from skene.growth_loops.storage import (
    load_existing_growth_loops,         # Load all loop JSONs from growth-loops/
    write_growth_loop_json,             # Write a loop JSON to disk
    generate_loop_definition_with_llm,  # Generate loop definition via LLM
    derive_loop_id,                     # Derive loop_id from name
    derive_loop_name,                   # Derive name from technical execution
)

from skene.growth_loops.push import (
    ensure_base_schema_migration,       # Create base schema migration
    build_loops_to_supabase,            # Build Supabase migrations from loops
    build_migration_sql,                # Generate migration SQL
    write_migration,                    # Write migration file
    push_to_upstream,                   # Push to upstream API
)

from skene.growth_loops.upstream import (
    validate_token,                     # Validate token via upstream API
    build_package,                      # Assemble deployment package
    build_push_manifest,                # Create push manifest with checksum
    push_to_upstream,                   # POST package to /api/v1/deploys
)
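
build_push_manifest attaches a checksum to the assembled package. One plausible shape, hashing the package's canonical JSON with SHA-256; the field names and hash choice here are assumptions:

```python
import hashlib
import json

def build_push_manifest(package: dict) -> dict:
    """Wrap the package with a SHA-256 checksum over its canonical JSON."""
    canonical = json.dumps(package, sort_keys=True, separators=(",", ":"))
    return {
        "package": package,
        "checksum": hashlib.sha256(canonical.encode()).hexdigest(),
    }
```

Sorting keys before hashing makes the checksum deterministic regardless of dict insertion order.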

Plan decline

from skene.planner.decline import (
    decline_plan,           # Archive a declined plan with executive summary only
    load_declined_plans,    # Load recent declined plans for reference
)

Documentation generation

from pathlib import Path
from skene import DocsGenerator, GrowthManifest

manifest = GrowthManifest.model_validate_json(Path("growth-manifest.json").read_text())

generator = DocsGenerator()
context_doc = generator.generate_context_doc(manifest)
product_doc = generator.generate_product_docs(manifest)

The PSEOBuilder class generates programmatic SEO content from manifests.

Strategy framework

The analysis pipeline is built on a composable strategy framework:

from skene.strategies import (
    AnalysisStrategy,    # Base strategy class
    AnalysisResult,      # Result container with data + metadata
    AnalysisMetadata,    # Timing, token usage, step info
    AnalysisContext,     # Shared context between steps
    MultiStepStrategy,   # Chains multiple steps together
)

from skene.strategies.steps import (
    AnalysisStep,        # Base step class
    SelectFilesStep,     # Select relevant files for analysis
    ReadFilesStep,       # Read file contents
    AnalyzeStep,         # Send to LLM for analysis
    GenerateStep,        # Generate structured output
)

These classes are primarily used internally by the analyzers but can be composed for custom analysis pipelines.

Planner

from skene.planner import Planner
from skene.planner.schema import GrowthPlan, TechnicalExecution, PlanSection

The Planner class generates growth plans from manifests and templates. It is used internally by the plan CLI command.

GrowthPlan schema

  • executive_summary (str) — High-level summary focused on first-time activation
  • sections (list[PlanSection]) — Numbered memo sections (1-6)
  • technical_execution (TechnicalExecution) — Section 7: Technical Execution
  • memo (str) — Section 8: The closing confidential engineering memo

TechnicalExecution fields

  • next_build (str) — What activation loop to build next
  • confidence (str) — Confidence level, e.g. "85%"
  • exact_logic (str) — Specific flow changes for first-action completion
  • data_triggers (str) — Events indicating first meaningful action
  • stack_steps (str) — Tools, scripts, or structural changes required
  • sequence (str) — Now / Next / Later priorities

PlanSection fields

  • title (str) — Section heading, e.g. "The Next Action"
  • content (str) — Free-form markdown content

Helper functions

  • render_plan_to_markdown(plan, project_name, generated_at) — Render a GrowthPlan to the council memo markdown format
  • parse_plan_json(response) — Parse an LLM response (with optional code fences) into a validated GrowthPlan
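
parse_plan_json must tolerate LLM responses wrapped in optional markdown code fences. A minimal sketch of the fence-stripping step before parsing; the real function validates the result into a GrowthPlan rather than returning a raw dict:

```python
import json
import re

def parse_plan_json(response: str) -> dict:
    """Strip optional ``` / ```json fences, then parse the JSON body."""
    match = re.search(r"`{3}(?:json)?\s*(.*?)\s*`{3}", response, re.DOTALL)
    body = match.group(1) if match else response
    return json.loads(body)
```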