skene CLI documentation

Analyze codebases for product-led growth opportunities, generate growth plans, and build implementation prompts.

Troubleshooting

Solutions for common issues when using skene.

LM Studio

Context length error

Error code: 400 - {'error': 'The number of tokens to keep from the initial prompt is greater than the context length...'}

The model's context length is too small for the analysis. To fix:

  1. In LM Studio, unload the current model
  2. Go to Developer > Load
  3. Click on Context Length: Model supports up to N tokens
  4. Set it to the maximum supported value
  5. Reload to apply changes

Reference: lmstudio-ai/lmstudio-bug-tracker#237

Connection refused

Ensure:

  • LM Studio is running
  • A model is loaded and ready
  • The server is running on the default port (http://localhost:1234)
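You can verify the last two points from the shell by querying LM Studio's OpenAI-compatible models endpoint (the port below is the default; adjust it if you changed the server settings):

```shell
# Prints a JSON list of loaded models if LM Studio's server is reachable;
# falls back to a short message otherwise
curl -s http://localhost:1234/v1/models || echo "LM Studio not reachable on port 1234"
```

An empty or error response with the server running usually means no model is loaded yet.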

For a custom port:

export LMSTUDIO_BASE_URL="http://localhost:8080/v1"

Ollama

Connection refused

Ensure:

  • Ollama is running (ollama serve)
  • A model is pulled and available (ollama list)
  • The server is on the default port (http://localhost:11434)
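All three points can be checked at once with Ollama's /api/tags endpoint, which returns the locally pulled models (adjust the port if you changed it):

```shell
# Returns {"models": [...]} if the Ollama server is up;
# falls back to a short message otherwise
curl -s http://localhost:11434/api/tags || echo "Ollama not reachable on port 11434"
```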

Getting started with Ollama:

# Pull a model
ollama pull llama3.3

# Start the server (usually runs automatically after install)
ollama serve

For a custom port:

export OLLAMA_BASE_URL="http://localhost:8080/v1"

API key issues

"No API key" or fallback to sample report

If analyze runs without an API key, it falls back to showing a sample preview. Set your key using one of:

# CLI flag
uvx skene analyze . --api-key "your-key"

# Environment variable
export SKENE_API_KEY="your-key"

# Config file (interactive)
uvx skene config

Wrong provider for API key

Make sure the API key matches the provider. An OpenAI key won't work with --provider gemini.

Provider issues

Unknown provider

Valid provider names:

  • openai
  • gemini
  • anthropic or claude
  • lmstudio, lm-studio, or lm_studio
  • ollama
  • generic, openai-compatible, or openai_compatible

Generic provider: missing base URL

The generic provider requires a base URL:

uvx skene analyze . --provider generic --base-url "http://localhost:8000/v1" --model "your-model"

Or set via environment variable:

export SKENE_BASE_URL="http://localhost:8000/v1"

File not found errors

Manifest not found (plan/build commands)

The plan and build commands look for files in ./skene-context/ by default. Make sure you've run analyze first:

uvx skene analyze .   # Creates ./skene-context/growth-manifest.json
uvx skene plan        # Reads from ./skene-context/

Or specify paths explicitly:

uvx skene plan --manifest ./path/to/manifest.json --template ./path/to/template.json
uvx skene plan --context ./my-output-dir

Growth plan not found (build command)

uvx skene plan    # Creates ./skene-context/growth-plan.md
uvx skene build   # Reads from ./skene-context/

# Or specify explicitly
uvx skene build --plan ./path/to/growth-plan.md

Rate limit errors

When a provider returns a rate limit error, skene silently falls back to a cheaper model. This keeps the workflow moving, but it means the output was generated by a different model than the one you configured.

If you need output from a specific model (e.g. during benchmarking), use --no-fallback:

uvx skene analyze . --no-fallback

With --no-fallback, the CLI retries the same model with exponential backoff. If all 3 retries are exhausted, the command raises an error instead of switching models.
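The backoff schedule behaves roughly like this (the 2-second base delay and doubling factor here are illustrative assumptions, not the CLI's documented timings):

```shell
# Illustrative only: each retry waits twice as long as the previous one
delay=2
for attempt in 1 2 3; do
  echo "retry $attempt would wait ${delay}s"
  delay=$((delay * 2))
done
```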

Push / upstream issues

"No token" error

If push says "No token", you need to authenticate first:

uvx skene login --upstream https://skene.ai/workspace/my-app

Or set the token via environment variable:

export SKENE_UPSTREAM_API_KEY="your-token"

"No growth loops with Supabase telemetry found"

The push command requires growth loops that include telemetry items with type: "supabase". Make sure you have run build first:

uvx skene build

Growth loop files are stored in skene-context/growth-loops/. Check that at least one loop has a requirements.telemetry entry with "type": "supabase".
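You can check this from the shell with jq. The sample file below is a hypothetical minimal loop created only to demonstrate the filter; the real files produced by build live in skene-context/growth-loops/ and contain much more:

```shell
# Hypothetical minimal loop file for demonstration
mkdir -p /tmp/skene-loops
cat > /tmp/skene-loops/example-loop.json <<'EOF'
{"requirements": {"telemetry": [{"type": "supabase"}]}}
EOF

# Print each loop file that declares at least one Supabase telemetry item
for f in /tmp/skene-loops/*.json; do
  jq -e '.requirements.telemetry[]? | select(.type == "supabase")' "$f" >/dev/null \
    && echo "$f"
done
# → /tmp/skene-loops/example-loop.json
```

Run the same loop over skene-context/growth-loops/*.json; if it prints nothing, no loop qualifies for push.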

Push authentication failed (401/403)

Your token may have expired or be invalid. Log out and log in again:

uvx skene logout
uvx skene login --upstream https://skene.ai/workspace/my-app

Base schema migration missing

If push fails because the base schema is missing, run init first:

uvx skene init

Then apply the migration with supabase db push.

Debug mode

Use --debug on any command to log all LLM input and output to .skene/debug/:

uvx skene analyze . --debug
uvx skene plan --debug
uvx skene chat --debug

Debug mode can also be enabled via environment variable or config:

export SKENE_DEBUG=true
# .skene.config
debug = true

The debug logs show the full prompts sent to the LLM and the complete responses, which is useful for diagnosing unexpected output or provider-specific issues.

Getting help