
Context Systems

AI models have no memory between API calls — every scene generation is a completely fresh request. Without help, the AI would forget character locations, plot developments, emotional states, and everything else that happened in your story so far.

Storywright solves this with three complementary context systems that feed information from previous scenes into each new generation. Together, they give the AI a "working memory" of your story, so scene 10 feels like it was written by someone who read scenes 1–9.


The Three Systems

Storywright uses three context layers, each serving a different purpose:

  • Scene Summaries: a narrative overview of each scene, giving the AI the story arc.
  • Memories: topic-tagged facts with semantic recall, surfacing specific relevant details.
  • Continuity Ledger: structured entity state (location, appearance, etc.), preventing logical errors.

Scene Summaries

What They Are

A 2–3 sentence recap of each completed scene, capturing the key events, decisions, and turning points.

How They Work

  1. After a scene is generated, the extraction model automatically produces a summary.
  2. When you generate the next scene, Storywright includes summaries of recent scenes as a "rolling window" in the prompt.
  3. The AI reads these summaries to understand where the story is before writing the next scene.
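The rolling window described above can be sketched as a simple selection over stored summaries. This is a minimal illustration, not Storywright's actual implementation; the window size and "Scene N:" labels are assumptions:

```python
def rolling_window(summaries: list[str], max_scenes: int = 5) -> str:
    """Format the most recent scene summaries for inclusion in a prompt.

    `max_scenes` is a hypothetical knob; the real window size may differ.
    """
    recent = summaries[-max_scenes:]
    start = len(summaries) - len(recent) + 1  # 1-based index of first kept scene
    return "\n".join(
        f"Scene {i}: {summary}" for i, summary in enumerate(recent, start=start)
    )
```

Older scenes simply fall out of the window; their key events survive only through memories and the continuity ledger.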

Why Summaries Instead of Full Text?

Including every previous scene's full text would quickly exceed token limits. A 10-scene story with 2,000-word scenes would be 20,000 words of prior context alone — far too much. Summaries compress each scene's key events into a few sentences, keeping the context window manageable.
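A rough back-of-the-envelope calculation shows why this matters. Using a common heuristic of about 0.75 English words per token (the exact ratio depends on the tokenizer):

```python
WORDS_PER_TOKEN = 0.75  # rough English heuristic; varies by tokenizer

def estimated_tokens(scenes: int, words_per_scene: int) -> int:
    """Estimate the token cost of including prior scenes in a prompt."""
    return round(scenes * words_per_scene / WORDS_PER_TOKEN)

full_text = estimated_tokens(10, 2000)   # 10 full 2,000-word scenes
summaries = estimated_tokens(10, 60)     # the same scenes as ~60-word summaries
```

Full text of ten scenes costs roughly 26,000+ tokens before the AI writes a single new word; summaries of the same scenes cost on the order of 800.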

Viewing

Summaries are stored per-scene and used internally during generation. They are not displayed in a separate panel — they work behind the scenes.


Memories

What They Are

Topic-tagged fact blocks extracted from generated scenes. Each memory captures a specific piece of information with a topic label:

  • "Character revelation: Elena discovered the hidden passage behind the bookshelf"
  • "Plot development: The council voted to close the border"
  • "World detail: The forest glows blue at night due to bioluminescent fungi"

How They're Extracted

After each scene is generated, the extraction model identifies important facts, tags them by topic, and generates vector embeddings for each one.

How They're Recalled

Before generating a new scene, Storywright uses hybrid recall combining two signals:

  • Keyword matching (always active): Memories whose topic or text overlaps with the current scene's context are scored by keyword relevance.
  • Semantic similarity (optional): If embeddings are enabled, cosine similarity between memory vectors and the current scene's context is used to find semantically relevant memories — even without keyword overlap.

The two scores are combined, and the top-N most relevant memories are included in the prompt.
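The recall steps above can be sketched as follows. The scoring functions and the blend weight `alpha` are assumptions for illustration, not Storywright's actual formulas:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors (0.0 if either is zero-length)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(memory_text: str, context: str) -> float:
    """Fraction of the memory's words that also appear in the context."""
    mem_words = set(memory_text.lower().split())
    ctx_words = set(context.lower().split())
    return len(mem_words & ctx_words) / len(mem_words) if mem_words else 0.0

def hybrid_recall(memories, context, context_vec=None, top_n=5, alpha=0.5):
    """Rank (text, vector) memories; vectors may be None when embeddings are off."""
    scored = []
    for text, vec in memories:
        score = keyword_score(text, context)
        if context_vec is not None and vec is not None:
            # Blend keyword relevance with semantic similarity.
            score = alpha * score + (1 - alpha) * cosine(vec, context_vec)
        scored.append((score, text))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in scored[:top_n]]
```

With embeddings disabled, `context_vec` is `None` and the function degrades gracefully to keyword-only ranking, mirroring the "Disabled" mode described below.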

Embedding Modes

Configure in Settings → Memory:

  • Disabled (default): keyword-only recall, no model needed. Use when getting started or keeping costs low.
  • On-Device: an ONNX model (MiniLM-L6-v2) runs locally, with no API calls. Use for offline work or privacy.
  • API: calls your configured embedding API (e.g. text-embedding-3-small). Use for best accuracy.

On-device embedding is not available on web.

Memory Stats

The Memory settings page shows embedding statistics:

  • "X / Y memories embedded (Z%)": how many memories have vector embeddings
  • A status indicator for the current embedding mode
  • A Batch Embed button to generate embeddings for all un-embedded memories

Viewing

Open the Memory Panel in the story workspace left sidebar. Memories are grouped by scene index, showing the topic tag and fact text for each entry. A count badge shows the total number of memories extracted so far.


Continuity Ledger

What It Is

Structured state tracking for every entity — characters, objects, locations — across your entire story. The ledger tracks 10 categories of state, giving the AI a precise, up-to-date picture of your story world.

How It Works

  1. After each scene, the extraction model identifies state changes as "deltas" — what changed during that scene.
  2. These deltas update the ledger, building a cumulative view of every entity's current state.
  3. The full current state of all entities is included in every scene's prompt.
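The delta-then-merge process above can be sketched like this. The delta record shape is hypothetical, chosen only to illustrate the fold:

```python
def apply_deltas(ledger: dict, deltas: list[dict]) -> dict:
    """Fold one scene's state changes into the cumulative ledger.

    Each delta is a hypothetical record like
    {"entity": "Elena", "category": "location", "value": "forest cabin"}.
    A later scene overwrites earlier state for the same entity/category pair.
    """
    for d in deltas:
        ledger.setdefault(d["entity"], {})[d["category"]] = d["value"]
    return ledger

ledger = {}
apply_deltas(ledger, [{"entity": "Elena", "category": "location", "value": "library"}])
apply_deltas(ledger, [
    {"entity": "Elena", "category": "location", "value": "forest cabin"},
    {"entity": "Elena", "category": "emotional", "value": "anxious"},
])
# ledger["Elena"] is now {"location": "forest cabin", "emotional": "anxious"}
```

Because only the latest value per entity/category survives, the ledger stays small no matter how many scenes the story accumulates.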

The 10 State Categories

  1. Location: where they are ("forest cabin", "city rooftop")
  2. Appearance: how they look ("wearing a torn red dress", "bloodied knuckles")
  3. Possession: what they have ("carrying a rusted key", "lost the map")
  4. Knowledge: what they know ("knows about the betrayal", "unaware of the trap")
  5. Relationship: how they relate to others ("growing trust with Marco", "hostile toward the council")
  6. Status: life state ("alive", "unconscious", "poisoned")
  7. Activity: what they're doing ("investigating the crime scene", "sleeping")
  8. Posture: body language ("arms crossed defensively", "leaning against the wall")
  9. Physical: physical condition ("broken arm", "exhausted", "well-rested")
  10. Emotional: emotional state ("anxious", "determined", "grieving")

Viewing

Open the Continuity Panel in the story workspace left sidebar. Entities are grouped by name, with state entries color-coded by category (10 colors). You can see at a glance where every character is, what they're carrying, how they feel, and more.


How They Work Together

During generation, Storywright assembles the prompt with all three context layers:

  1. Summaries: "Here's what happened so far" (narrative overview)
  2. Memories: "Here are specific relevant facts" (semantic precision)
  3. Continuity: "Here's the current state of the world" (structured ground truth)

This layered approach means:

  • Summaries give the AI the story arc — the big picture of where the plot has been and where it's going.
  • Memories provide specific details that matter right now — a character's secret, a promise made three scenes ago, a world-building detail.
  • Continuity prevents logical errors — a character can't use a sword they lost two scenes ago, can't be in the tavern if they just left for the mountain pass.

No single system could do it all. Summaries are too compressed for specific facts. Memories are too fragmented for narrative flow. Continuity is too structured for storytelling nuance. Together, they cover all three needs.
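Assembling the three layers into a prompt preamble might look like this. The section labels and layout are illustrative assumptions, not Storywright's actual prompt format:

```python
def assemble_context(summaries: list[str], memories: list[str],
                     ledger: dict[str, dict[str, str]]) -> str:
    """Combine the three context layers into one prompt preamble."""
    parts = ["STORY SO FAR:"]
    parts += summaries                            # narrative overview
    parts.append("RELEVANT FACTS:")
    parts += [f"- {m}" for m in memories]         # recalled memories
    parts.append("CURRENT STATE:")
    for entity, state in ledger.items():          # continuity ledger
        desc = "; ".join(f"{cat}: {val}" for cat, val in state.items())
        parts.append(f"{entity} ({desc})")
    return "\n".join(parts)
```

The ordering reflects the layering: big picture first, then specifics, then hard constraints the new scene must not violate.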


Viewing Context Panels

In the story workspace, the left sidebar has separate collapsible sections for Memory and Continuity:

  • Memory — All extracted memories grouped by scene. Click a scene group to expand and see individual memories. The badge shows the total count.
  • Continuity — All entities with their current state. Color-coded by category (10 colors), grouped by entity name.

Each section has its own expand/collapse state — you can have Memory open and Continuity closed, or vice versa. These panels are read-only views — they reflect what the AI will use as context during generation.


When Context Updates

After Generation

All three systems extract automatically after each scene is generated. No action needed from you.

After Revision

When the AI revises a scene, it re-extracts summaries, memories, and continuity from the new text. The old extractions are replaced.

After Manual Edits

If you manually edit a scene's text, the extracted continuity for that scene may become stale — the text changed but the extractions didn't. The app notes this, and you can re-extract if needed.

After Scene Deletion

Removing a scene removes its memories and continuity contributions. The ledger recalculates based on the remaining scenes.
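One way to recalculate is to replay the surviving scenes' deltas in order, skipping the deleted ones. This sketch assumes the same hypothetical delta records as above:

```python
def rebuild_ledger(scene_deltas: list[list[dict]], deleted: set[int]) -> dict:
    """Replay deltas from all surviving scenes, in order, to rebuild the ledger.

    `scene_deltas[i]` holds the deltas extracted from scene i (0-based);
    indices in `deleted` are skipped entirely.
    """
    ledger: dict = {}
    for i, deltas in enumerate(scene_deltas):
        if i in deleted:
            continue
        for d in deltas:
            ledger.setdefault(d["entity"], {})[d["category"]] = d["value"]
    return ledger
```

Replaying from scratch keeps the logic simple and guarantees the ledger never carries state from a scene that no longer exists.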


Tips

  • Context is automatic — you don't need to manage these systems manually. They extract and update behind the scenes.
  • Check Continuity when generation seems off — if the AI makes a logical error (wrong location, forgotten injury), open the Continuity Panel. The extraction model may have missed a state change. You can edit or re-extract.
  • More scenes = richer context — the first scene has minimal context; by scene 5–10, the AI has a deep understanding of your story world.
  • Use lorebooks for must-not-forget facts — memories use probabilistic similarity recall, so there's a small chance a specific fact won't surface. If something is critical to your story (a magic system rule, a character's full backstory), add it as a lorebook entry with keyword triggers for deterministic inclusion.

See Also

  • Generation — the full generation pipeline and how context feeds into it
  • Lorebooks — the fourth context layer (keyword-triggered, deterministic)
  • Stories — using the story workspace and its panels