Troubleshooting & FAQ

Having trouble? This guide covers the most common issues and frequently asked questions.


Common Issues

"Connection failed" or "API error"

  • Check your API key. Go to Settings → Providers. Make sure the key is entered correctly with no extra spaces.
  • Check the base URL. The default is https://nano-gpt.com/api/v1. If you're using another provider, confirm you have the correct endpoint.
  • Test the connection. Click "Test Connection" in Settings. The error message will tell you exactly what went wrong.
  • Network issues. Make sure you have internet access. If you're behind a firewall or VPN, the API endpoint may be blocked.
  • Key expired or invalid. Check your API provider's dashboard to confirm the key is active and has remaining credits.

Generation produces poor or repetitive results

  • Improve your premise. A vague premise leads to generic output. Be specific about characters, setting, tone, and conflict.
  • Add creative direction. Use the Creative Direction field to steer tone — for example, "noir atmosphere, dry humor, unreliable narrator."
  • Check character cards. Incomplete character descriptions give the AI less to work with. Run Quality Inspection (shield icon) on your characters to find gaps.
  • Try a different writing style. Switch between literary-fiction, genre-fiction, etc. Each dramatically changes the AI's approach.
  • Try a better model. Higher-quality models (GPT-4, Claude 3.5 Sonnet) produce significantly better fiction than cheaper models.
  • Edit the plan first. If the scene outline is weak, the generated text will be too. Refine the plan before generating.

Lorebook entries aren't activating

  • Check keywords. Entries only trigger when their keywords appear in the current scene context (character names, scene outline, summaries). Make sure the keywords match exactly.
  • Check if the entry is enabled. Disabled entries (toggled off) won't trigger regardless of keywords.
  • Check entry scope. World entries require the story to use that world. Story entries require the story filter to be active. Character entries require that character to be assigned to the story.
  • Use broader keywords. If an entry for "Excalibur" isn't triggering, the keyword might not appear in the context. Try also adding the name of the character who wields it.
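The activation logic described above can be sketched as a simple keyword match. This is an illustration only, assuming case-insensitive substring matching against the assembled scene context; it is not Storywright's actual implementation:

```python
def active_entries(entries, context):
    """Return the lorebook entries whose keywords appear in the scene context.

    Illustrative sketch: assumes each entry is a dict with 'keywords'
    (a list of strings) and an optional 'enabled' flag.
    """
    ctx = context.lower()
    return [
        e for e in entries
        if e.get("enabled", True)  # disabled entries never trigger
        and any(kw.lower() in ctx for kw in e["keywords"])
    ]
```

Under this model, an entry keyed only on "Excalibur" stays dormant unless that word appears somewhere in the context, which is why adding a second keyword such as the wielder's name makes it trigger more reliably.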

Characters seem inconsistent across scenes

  • Check the Continuity Panel. The AI might have extracted incorrect state. Look for wrong locations, appearances, or other details.
  • Revise the scene. Use revision to fix inconsistencies — for example, "Elena should still be at the library, not the rooftop."
  • Add lorebook entries. For critical facts that must always be maintained, add them as constant lorebook entries or use specific keywords to ensure they trigger.
  • Check memory recall. The Memory Panel shows what the AI "remembers." If a crucial fact isn't there, it may have been missed during extraction.

The app is slow

  • Check your model. Larger models (GPT-4) are slower than smaller ones. For drafting, consider switching to a faster model.
  • Check your API provider. Some providers have rate limits or slow response times during peak hours.
  • Large context. Many lorebook entries, long stories with lots of memories — all of this adds tokens to each request. Disable unused lorebook entries to reduce context size.
  • Post-processing. The three parallel extraction calls (summary, memory, continuity) after each scene generation take a few seconds. This is normal.
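To get a feel for how context size grows, a rough rule of thumb for English prose is about four characters per token. The sketch below uses that heuristic (actual tokenization varies by model, and the request layout here is an assumption, not Storywright's internals):

```python
def rough_token_count(text):
    """Very rough token estimate for English prose (~4 characters per token).

    Heuristic only; real tokenizers vary by model.
    """
    return max(1, len(text) // 4)

def context_estimate(lorebook_entries, memories, outline):
    """Sum a rough token estimate over pieces that go into each request."""
    parts = list(lorebook_entries) + list(memories) + [outline]
    return sum(rough_token_count(p) for p in parts)
```

Every active lorebook entry and retained memory adds its share to this total on every request, which is why disabling unused entries speeds things up.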

Can't see restricted content

  • Check your content level. Go to Settings → Content. Your active content level determines which tagged content is visible. Switch to a higher level (e.g., Mature or Unrestricted) to see restricted content.
  • Content is hidden, not deleted — changing your level makes it visible again immediately.

Import failed (character card or lorebook)

  • PNG cards: The file must be a PNG with embedded SillyTavern character data (a tEXt chunk with a chara or ccv3 key). Not all PNG images contain this data.
  • JSON cards: Must follow the SillyTavern V2 character card format.
  • Lorebook JSON: Must follow the SillyTavern World Info format with an entries object.
  • Encoding: Files must be UTF-8 encoded.
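If a PNG card fails to import, you can check whether it actually carries embedded data by scanning its chunks for a tEXt chunk keyed chara or ccv3. This is a standalone diagnostic sketch based on the PNG chunk layout, not Storywright's importer:

```python
import struct

def has_card_data(png_path):
    """Return True if the PNG contains a tEXt chunk keyed 'chara' or 'ccv3'.

    Diagnostic sketch for failed imports; not the app's import code.
    """
    with open(png_path, "rb") as f:
        if f.read(8) != b"\x89PNG\r\n\x1a\n":
            return False  # not a PNG at all
        while True:
            header = f.read(8)
            if len(header) < 8:
                return False  # truncated file
            length, ctype = struct.unpack(">I4s", header)
            data = f.read(length)
            f.read(4)  # skip the chunk CRC
            if ctype == b"tEXt" and data.split(b"\x00", 1)[0] in (b"chara", b"ccv3"):
                return True
            if ctype == b"IEND":
                return False  # reached the end without finding card data
```

A PNG that returns False here is just an image; re-export the card from its source application.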

Web platform issues

  • Local models not connecting. Browsers block cross-origin requests to localhost. Enable CORS on your local server:
      • LM Studio — Developer tab → enable "Allow requests from any origin (CORS)"
      • Ollama — Set OLLAMA_ORIGINS=* before starting
  • Data not persisting. Web storage is browser-local. Your data won't sync across browsers or survive clearing browser data. Export your stories regularly.
  • Google Gemini 400 error. If you see "Multiple authentication credentials," make sure you're using the Google Gemini preset (not Custom or OpenAI) — it uses the correct x-goog-api-key header.

Scene text was lost

  • Check undo/redo. Use Cmd/Ctrl+Z to undo recent changes.
  • Version history. If you saved a snapshot before the change, you can restore it from version history.
  • Auto-save. The app auto-saves frequently, but a manual save (Cmd/Ctrl+S) guarantees your latest edits are written to disk.

Frequently Asked Questions

How much does it cost?

Storywright itself is free. You pay for AI API usage through your API provider. Costs depend on:

  • Which models you use — premium models (Claude Opus, GPT-4.1) cost 5–20× more per token than fast models (Gemini Flash, GPT-4.1-mini)
  • How long your scenes are — longer scenes use more tokens
  • How many scenes you generate — including revisions and regenerations

A typical 10-scene story costs roughly $0.50–$2.00 with mid-range models.
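The arithmetic behind that estimate can be sketched as below. The token counts and prices are placeholders chosen to land in the mid-range band; check your provider's current pricing:

```python
def scene_cost(input_tokens, output_tokens, in_price_per_mtok, out_price_per_mtok):
    """Cost of one generation call in dollars, given per-million-token prices.

    All numbers used with this are illustrative assumptions, not real pricing.
    """
    return (input_tokens * in_price_per_mtok
            + output_tokens * out_price_per_mtok) / 1_000_000

# Example: 10 scenes at ~8k input / ~2k output tokens each,
# at a hypothetical $3 in / $15 out per million tokens
total = 10 * scene_cost(8_000, 2_000, 3.0, 15.0)  # ≈ $0.54
```

Doubling scene length or switching to a premium model scales this total directly, which is why costs vary so widely.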

Is my data sent anywhere?

  • Your story text, characters, and lorebooks are stored locally on your device.
  • AI prompts are sent to your configured API provider (nano-gpt, OpenAI, etc.) for generation.
  • Storywright has no backend server — there's nothing between you and the API.
  • Your API key is stored locally and only sent to the API endpoint you configured.

Can I use local/offline AI models?

Yes! Any OpenAI-compatible API works. For local models:

Provider                      Base URL
Ollama                        http://localhost:11434/v1
LM Studio                     http://localhost:1234/v1
vLLM / text-generation-webui  Use their OpenAI-compatible endpoint

Note: Local models need sufficient VRAM and may produce lower-quality fiction than cloud models.

Can I use my own OpenAI key directly?

Yes. Set the base URL to https://api.openai.com/v1 and enter your OpenAI API key in Settings → Providers.

What models work best for fiction writing?

Use case                          Recommended models
Best quality                      GPT-4.1, Claude Sonnet 4, Gemini 2.5 Pro, Claude Opus 4
Good quality, lower cost          GPT-4o, Gemini 2.0 Flash, Llama 3.3 70B (via NanoGPT)
Extraction (summaries, memories)  GPT-4.1-nano, GPT-4.1-mini, Claude Haiku 3.5, Gemini 2.0 Flash

Can I export my stories?

Yes — you can export as Markdown, plain text, plan outline, or copy to clipboard. Find the export menu in the toolbar of the story workspace.

Can I collaborate with others?

Not currently. Storywright is a single-user local app. Stories are stored on your device only.

How do I back up my data?

Copy the storage directory for your platform:

  • macOS: ~/Documents/Storywright/
  • Windows: %USERPROFILE%\Documents\Storywright\

All your stories, characters, worlds, and settings are JSON files in this folder.
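Because the storage directory is plain files, a backup is just a copy of that folder. A minimal sketch, assuming the macOS path listed above (substitute the Windows path as needed) and a hypothetical backup destination:

```python
import shutil
from datetime import date
from pathlib import Path

def backup_storage(storage_dir, backup_root):
    """Copy the storage folder to a dated backup directory and return its path.

    Paths are examples; use the location listed above for your platform.
    """
    dest = Path(backup_root) / f"Storywright-backup-{date.today().isoformat()}"
    shutil.copytree(storage_dir, dest)  # fails if the dated folder already exists
    return dest

# macOS example (backup destination is an assumption):
# backup_storage(Path.home() / "Documents" / "Storywright", Path.home() / "Backups")
```

Run it on a schedule or before major edits; each run produces a self-contained dated snapshot you can restore by copying back.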

Debug logging: Enable verbose logging in Settings → Debug to see detailed LLM request/response logs. Log files are saved to your storage directory under logs/.