
Living Knowledge Base

Every AI coding session generates insights — architecture decisions, bug workarounds, dependency quirks — that get lost when the session ends. The Living Knowledge Base captures these into structured entries, compiles them into a project wiki, and feeds them back into future sessions.

Sessions → Extract (evidence) → Promote (claim) → Compile → Wiki → Agents read & write back
  1. AI agents discover facts during coding sessions
  2. Facts are saved as knowledge entries (via MCP tools or CLI)
  3. Each entry is classified as evidence (raw, from sync), claim (promoted active truth), or note (rejected/dismissed)
  4. Compilation auto-promotes qualifying evidence to claim and merges active claims into the project context document
  5. Section pages and concept articles are auto-generated from active claims only
  6. Future agents read the wiki via get_project_context and contribute back

The loop is self-reinforcing: the more sessions you run, the richer the knowledge base becomes.

Every knowledge entry has a claim_class that controls its lifecycle:

| Class | Meaning | When |
| --- | --- | --- |
| evidence | Raw fact extracted from a session or submitted by an agent that didn’t meet promotion gates | Default for sync-extracted entries and agent add_knowledge calls |
| claim | Promoted active truth | Evidence auto-promoted at compile (confidence ≥ 0.5, content ≥ 30 chars), or agent entries that pass specificity + dedup + rate-limit gates |
| note | Rejected or dismissed — audit-visible but excluded from compile | Manual dismiss, or agent submissions that failed all gates |
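The compile-time promotion gate can be sketched as a small predicate. This is an illustrative model, not the actual implementation: the thresholds (confidence ≥ 0.5, content ≥ 30 chars) are documented above, while the function and field names are assumptions.

```python
def promote_class(entry: dict) -> str:
    """Return the claim_class an entry would receive at compile (sketch)."""
    if entry.get("dismissed"):
        return "note"      # rejected/dismissed: audit-visible, excluded from compile
    if entry["confidence"] >= 0.5 and len(entry["content"]) >= 30:
        return "claim"     # auto-promoted at compile
    return "evidence"      # stays raw evidence until it meets the gates
```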

Freshness decay. Each type has its own decay window applied from last_relevant_at:

| Type | Window before stale |
| --- | --- |
| bug | 30 days |
| dependency | 60 days |
| pattern, discovery | 90 days |
| convention | 180 days |
| decision | 365 days |

Past the window, freshness_class moves current → stale → archived. Stale entries are surfaced in the dashboard review queue; only current active claims feed compile by default.
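The decay rule follows directly from the table. A sketch, assuming last_relevant_at is a datetime and modeling only the current → stale transition (the exact stale → archived boundary isn't specified here):

```python
from datetime import datetime, timedelta

# Decay windows from the table above, in days.
DECAY_DAYS = {
    "bug": 30, "dependency": 60, "pattern": 90,
    "discovery": 90, "convention": 180, "decision": 365,
}

def freshness_class(entry_type: str, last_relevant_at: datetime,
                    now: datetime) -> str:
    """Classify an entry as current or stale based on its type's window."""
    window = timedelta(days=DECAY_DAYS[entry_type])
    return "current" if now - last_relevant_at <= window else "stale"
```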

Supersession. Retire an old claim in favor of a new one with PUT /api/v1/projects/{id}/entries/{entry_id}/supersede. The old entry remains readable for audit but is excluded from compile.

Refresh. Extend an entry’s validity window with PUT /api/v1/projects/{id}/entries/{entry_id}/refresh — updates last_relevant_at and resets freshness_class to current.

Rebuild. POST /api/v1/projects/{id}/rebuild (or sfs project rebuild) resets compiled_at=NULL on all active claims and clears context_document, forcing a full re-compile on settled projects.

```sh
# Initialize a project (auto-detects git remote)
sfs project init

# Add initial context in your editor
sfs project edit

# Enable auto-narrative (generates summaries on sync)
sfs project set --auto-narrative

# Run your first compilation
sfs project compile
```

After init, the project is linked to your git remote. Every team member with access sees the same knowledge base.

Each entry has a type, content, confidence score (0.0-1.0), and an optional link to the session that created it.

| Type | Use For | Example |
| --- | --- | --- |
| decision | Architecture and design choices | "WebSockets rejected in favor of HTTP + ETags" |
| pattern | Code patterns and conventions | "All converters follow parse → canonical → write" |
| bug | Known issues and workarounds | "SQLite WAL mode can corrupt on power loss" |
| convention | Coding standards | "All API routes use async handlers" |
| dependency | External library notes | "react-markdown renders project context, not a custom parser" |
| discovery | General findings | "The auth middleware resolves tier from org, not user" |
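As a rough model, an entry carries the fields described above. A minimal sketch with validation; the class and field names here are assumptions for illustration, mirroring the body fields the API accepts (content, entry_type, confidence, session_id):

```python
from dataclasses import dataclass
from typing import Optional

VALID_TYPES = {"decision", "pattern", "bug", "convention", "dependency", "discovery"}

@dataclass
class KnowledgeEntry:
    """Sketch of a knowledge entry: type, content, confidence, optional session link."""
    entry_type: str
    content: str
    confidence: float
    session_id: Optional[str] = None

    def __post_init__(self):
        if self.entry_type not in VALID_TYPES:
            raise ValueError(f"unknown entry type: {self.entry_type}")
        if not 0.0 <= self.confidence <= 1.0:
            raise ValueError("confidence must be in [0.0, 1.0]")
```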

Entries start as pending. After compilation, they become compiled. Irrelevant entries can be dismissed:

```sh
# List all entries
sfs project entries

# Filter by type or status
sfs project entries --type decision
sfs project entries --pending

# Dismiss a noisy entry
sfs project dismiss 42
```

Entries with confidence below 0.5 are marked as (unverified) in the compiled output.

Compilation merges pending entries into the project context document and generates section wiki pages.

Compile is human-driven — there is no automatic background scheduler. Pending claims sit in the queue until somebody compiles. Three places to trigger a compile pass:

  • Dashboard. A “Compile now” CTA appears in the project page workflow hint banner whenever pending count > 0, plus the prominent buttons on the Entries and Context tabs.
  • Agent (MCP). Agents are instructed to check get_knowledge_health after writing knowledge, surface the pending count to the user, and call compile_knowledge_base only on explicit consent.
  • CLI. sfs project compile from your terminal.

Concurrent compile callers serialize on a project-row lock — duplicate context documents and duplicate concept pages can’t be created by races.

When no LLM API key is provided, compilation uses a rule-based approach:

  • Groups entries by type under structured section headings (## Key Decisions, ## Patterns & Conventions, etc.)
  • Appends new entries to the appropriate section
  • Marks low-confidence entries as unverified
  • Adds a ## Recent Changes section from the current batch
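A toy version of this deterministic path, grouping a batch under section headings and flagging low-confidence entries. The two headings quoted above are documented; the headings for the remaining types are assumptions:

```python
SECTION_HEADINGS = {
    "decision": "## Key Decisions",
    "pattern": "## Patterns & Conventions",
    "convention": "## Patterns & Conventions",
    "bug": "## Known Issues",          # assumed heading
    "dependency": "## Dependencies",   # assumed heading
    "discovery": "## Discoveries",     # assumed heading
}

def compile_rule_based(entries: list) -> str:
    """Render a batch of entries into grouped sections plus a Recent Changes list."""
    sections = {}
    for e in entries:
        line = f"- {e['content']}"
        if e["confidence"] < 0.5:
            line += " (unverified)"    # low-confidence entries are flagged
        sections.setdefault(SECTION_HEADINGS[e["entry_type"]], []).append(line)
    parts = [f"{h}\n" + "\n".join(lines) for h, lines in sections.items()]
    parts.append("## Recent Changes\n" + "\n".join(f"- {e['content']}" for e in entries))
    return "\n\n".join(parts)
```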

When an LLM API key is available, the compiler sends the current context plus grouped entries to the model. The LLM deduplicates, reorganizes, and produces a cleaner merged document; on failure, compilation falls back to the deterministic path.

  1. Fetches all pending (not dismissed, not yet compiled) entries
  2. Groups them by entry type
  3. Merges into the existing context document
  4. Marks all processed entries as compiled
  5. Saves a before/after snapshot in the compilation history
  6. Creates or updates section pages in the wiki (one per entry type)
  7. Auto-generates concept articles for recurring topics
```sh
sfs project compile
# Compiling pending entries...
# Project context updated. 12 entries compiled.
```

Each entry type gets its own wiki page, auto-generated from all non-dismissed entries of that type:

| Entry Type | Wiki Page Slug |
| --- | --- |
| decision | key-decisions |
| pattern | patterns |
| convention | coding-conventions |
| bug | known-issues |
| dependency | dependencies |
| discovery | discoveries |
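In code, this mapping is a straight lookup, one slug per entry type, exactly as in the table above:

```python
SECTION_SLUGS = {
    "decision": "key-decisions",
    "pattern": "patterns",
    "convention": "coding-conventions",
    "bug": "known-issues",
    "dependency": "dependencies",
    "discovery": "discoveries",
}

def section_page_slug(entry_type: str) -> str:
    """Return the wiki page slug for an entry type."""
    return SECTION_SLUGS[entry_type]
```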

The wiki is a collection of markdown pages organized by slug. Pages can be manually written or auto-generated from knowledge entries.

```sh
# List all wiki pages
sfs project pages
# Slug           Title                  Type     Words  Entries  Auto
# key-decisions  Key Decisions          section  340    18       yes
# patterns       Patterns               section  210    12       yes
# architecture   Architecture Overview  section  890    0        no

# Read a specific page
sfs project page key-decisions
```

After compilation, the system identifies recurring topics across entries and generates concept articles. These are standalone wiki pages that synthesize related entries into a coherent narrative.

Auto-generated pages can be regenerated from the latest entries:

```sh
sfs project regenerate key-decisions
# Article regenerated (340 words from 18 entries).
```

Only auto-generated pages support regeneration. Manually written pages are left untouched.

Pages track which entries and other pages reference them. View backlinks at the bottom of any page:

```sh
sfs project page architecture
# [content rendered in panel]
#
# Backlinks:
#   entry:42 (references)
#   entry:67 (references)
```
| Command | Description |
| --- | --- |
| sfs project init | Initialize project context for the current repo |
| sfs project compile | Compile pending entries into the context document |
| sfs project rebuild | Force a full re-compile (reset compiled_at=NULL + clear context document) |
| sfs project entries | List knowledge entries (supports --pending, --type, --limit) |
| sfs project ask "question" | Search the knowledge base and show matching entries |
| sfs project pages | List all wiki pages |
| sfs project page <slug> | View a wiki page with backlinks |
| sfs project health | Run health checks — pending count, staleness, score |
| sfs project regenerate <slug> | Regenerate an auto-generated concept page |
| sfs project set --auto-narrative | Enable auto-narrative summaries on sync |
| sfs project dismiss <id> | Dismiss an irrelevant entry |
| sfs project edit | Edit the context document in $EDITOR |
| sfs project show | Display the current context document |

The health command scores your knowledge base and flags issues:

```sh
sfs project health
# Project Health: my-project
#
# ✓ Context document exists (1200 words, 8 sections)
# ✓ 45 knowledge entries (38 compiled, 2 dismissed)
# ⚠ 5 entries pending compilation
# ✓ Last compiled: 2026-04-01
# ✓ Context appears up to date
#
# Health Score: 90%
```

The SessionFS MCP server exposes five knowledge-related tools that AI agents call during sessions.

| Tool | Description |
| --- | --- |
| add_knowledge(content, entry_type) | Add a knowledge entry. Optional: session_id, confidence (0.0-1.0) |
| update_wiki_page(slug, content) | Create or update a wiki page. Optional: title |
| list_wiki_pages() | List all wiki pages with slugs, titles, and word counts |

| Tool | Description |
| --- | --- |
| search_project_knowledge(query) | Search entries by content. Optional: entry_type, limit |
| ask_project(question) | Research a question across context, entries, and local sessions |

When an agent calls get_project_context, the response includes contribution instructions. Agents are expected to call add_knowledge() when they discover something significant:

```python
# Example: agent discovers a pattern during a session
add_knowledge("All CLI commands use the @handle_errors decorator for consistent error handling", "pattern")
```

The entry is stored as pending until the next compilation cycle.

All endpoints are under /api/v1/projects/{project_id} and require authentication.

| Method | Path | Description |
| --- | --- | --- |
| GET | /entries | List entries. Query params: type, pending, search, limit |
| GET | /entries/search | Search active claims by default. Pass include_stale=true to include evidence, notes, and superseded entries |
| POST | /entries/add | Add a single entry. Body: content, entry_type, confidence, session_id, claim_class (default: note) |
| PUT | /entries/{entry_id} | Dismiss or un-dismiss an entry |
| PUT | /entries/{entry_id}/refresh | Reset last_relevant_at and freshness_class=current. Replaces the old "Still valid" no-op |
| PUT | /entries/{entry_id}/supersede | Retire an old claim and link it to a superseding entry. Body: superseded_by_id, reason |
| POST | /entries/dismiss-stale | Bulk-dismiss low-confidence stale entries (confidence < 0.5 + > 90 days unreferenced) |
| POST | /compile | Compile pending entries. Auto-promotes evidence ≥ 0.5 confidence to claim before merging |
| POST | /rebuild | Reset compiled_at=NULL on all active claims and clear context_document to force a full re-compile |
| GET | /compilations | List compilation history |
| GET | /health | Health status: entry counts, staleness, word count, score inputs, stale review queue |
| GET | /pages | List wiki pages |
| GET | /pages/{slug} | Get page content with backlinks |
| PUT | /pages/{slug} | Create or update a page. Body: content, title |
| DELETE | /pages/{slug} | Delete a wiki page |
| POST | /pages/{slug}/regenerate | Regenerate an auto-generated page |
| PUT | /settings | Update project settings (e.g., auto_narrative, kb_max_context_words — default 2000) |
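The dismiss-stale criterion reduces to a simple predicate. A sketch of the documented rule (confidence < 0.5 and unreferenced for more than 90 days); the field names are assumptions:

```python
from datetime import datetime, timedelta

def is_dismiss_stale_candidate(entry: dict, now: datetime) -> bool:
    """True if an entry matches the documented bulk dismiss-stale criterion."""
    unreferenced_for = now - entry["last_relevant_at"]
    return entry["confidence"] < 0.5 and unreferenced_for > timedelta(days=90)
```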

When to compile. Run sfs project compile after batches of sessions, not after every single entry. A good cadence is weekly or after a feature sprint. The health command tells you when pending entries are piling up.

Seed the knowledge base early. Before your first AI session, add a few decision and convention entries manually via the API or MCP tools. This gives agents immediate context and sets the tone for the kind of knowledge you want captured.

Dismiss noise aggressively. Not every discovery is worth keeping. Dismiss trivial entries (sfs project dismiss <id>) so they don’t clutter compiled output. The confidence score helps — entries below 0.5 are flagged as unverified.

Use wiki pages for depth. Knowledge entries are single facts. For topics that need explanation — architecture, deployment, onboarding — create wiki pages via update_wiki_page() or the dashboard editor.

Monitor health regularly. A healthy project has a score above 80%, no stale context, and fewer than 10 pending entries. Run sfs project health in CI or as a periodic check.
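Those thresholds can be encoded as a single check for a periodic job. The numbers are the rules of thumb above, not values enforced by the tool, and the inputs would come from the health command or endpoint:

```python
def knowledge_base_healthy(score_pct: float, pending: int, stale_context: bool) -> bool:
    """Apply the rule-of-thumb thresholds: score > 80%, < 10 pending, no stale context."""
    return score_pct > 80 and pending < 10 and not stale_context
```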

Let auto-narrative do its job. With --auto-narrative enabled, session summaries are extracted on sync and stored as entries automatically. This is the lowest-effort way to build the knowledge base — just use your AI tools normally.