Features / Session Summary

Know what happened.
In 10 seconds.

Every session automatically summarized — files changed, tests run, commands executed, key decisions made. No more reading 2,000 messages to understand what the AI did.

Deterministic extraction runs instantly with no LLM cost. Parses pytest, jest, and go test output automatically.

sfs summary ses_abc
session summary
Debug auth middleware
2.3h · 327 msgs · 28 tool calls
Claude Code · feature/auth-fix @ a1b2c3d
Files
Modified: middleware.py, tokens.py, test_auth.py
Read: config.py, user.py + 3 more
Activity
Tests: 5 passed, 1 failed
Commands: 34 · Packages: pyjwt, redis
Outcome
Fix applied. 5/6 tests passing.

How summaries are built

No LLM needed

Deterministic extraction parses tool calls directly. Files, commands, test results, packages, and errors — extracted in milliseconds, zero API cost.
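To make the idea concrete, here is a minimal sketch of what deterministic extraction can look like. The tool-call schema (`edit_file`, `read_file`, `run_command` and their arguments) is hypothetical, for illustration only, not the product's actual record format:

```python
def extract_summary(tool_calls):
    """Walk structured tool-call records and tally files, commands,
    and packages -- plain dictionary lookups, no LLM involved."""
    summary = {"modified": set(), "read": set(), "commands": 0, "packages": set()}
    for call in tool_calls:
        name, args = call["tool"], call.get("args", {})
        if name == "edit_file":
            summary["modified"].add(args["path"])
        elif name == "read_file":
            summary["read"].add(args["path"])
        elif name == "run_command":
            summary["commands"] += 1
            cmd = args.get("command", "")
            if cmd.startswith("pip install"):
                # everything after "pip install" is a package name
                summary["packages"].update(cmd.split()[2:])
    return summary

calls = [
    {"tool": "read_file", "args": {"path": "config.py"}},
    {"tool": "edit_file", "args": {"path": "tokens.py"}},
    {"tool": "run_command", "args": {"command": "pip install pyjwt redis"}},
    {"tool": "run_command", "args": {"command": "pytest"}},
]
print(extract_summary(calls))
```

Because every field comes from structured tool-call records rather than model output, the result is reproducible and effectively free to compute.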

Test framework support

Parses pytest ("5 passed, 1 failed"), jest ("Tests: 10 passed"), and go test ("ok / FAIL") output automatically — no configuration required.
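The parsing itself is just pattern matching on the runners' summary lines. A hedged sketch (these regexes are illustrative, not the product's exact patterns):

```python
import re

def parse_test_output(line):
    """Best-effort (passed, failed) counts from common test-runner
    summary lines; returns None if the line isn't recognized."""
    # pytest and jest both print "N passed" (pytest adds ", N failed")
    m = re.search(r"(\d+) passed(?:, (\d+) failed)?", line)
    if m:
        return int(m.group(1)), int(m.group(2) or 0)
    # go test prints "ok  <pkg>  <time>" or "FAIL  <pkg>  <time>" per package
    if line.startswith("ok"):
        return 1, 0
    if line.startswith("FAIL"):
        return 0, 1
    return None

print(parse_test_output("5 passed, 1 failed"))          # pytest
print(parse_test_output("Tests: 10 passed, 10 total"))  # jest
print(parse_test_output("FAIL\tmyapp/auth\t0.42s"))     # go test
```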

Dashboard + CLI

Summary tab in the web dashboard with metric cards. CLI with sfs summary --today for daily overviews across all sessions.

Optional: LLM narrative

The deterministic layer is always free. When you want a "what happened" and "key decisions" paragraph, add your own LLM key and the summary gains a narrative layer.


Works with the same BYOK setup as the Judge — your key, any OpenAI-compatible endpoint, never stored.
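Targeting any OpenAI-compatible endpoint means BYOK wiring is just a base URL plus a bearer token. A sketch of what such a request could look like (the model name and key are placeholders, and the payload is only constructed here, never sent):

```python
import json

def build_narrative_request(base_url, api_key, model, deterministic_summary):
    """Build an OpenAI-compatible chat-completions request asking the
    model to narrate a deterministic summary. Nothing is sent here."""
    return {
        "url": f"{base_url.rstrip('/')}/chat/completions",
        "headers": {
            # the key travels with the request only -- it is not persisted
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [
                {"role": "system", "content": (
                    "Summarize this coding session: one 'what happened' "
                    "paragraph, then bullet-point key decisions.")},
                {"role": "user", "content": json.dumps(deterministic_summary)},
            ],
        }),
    }

req = build_narrative_request(
    "https://api.openai.com/v1", "sk-placeholder", "gpt-4o-mini",
    {"files_modified": ["tokens.py"], "tests": "5 passed, 1 failed"},
)
print(req["url"])
```

Swapping providers is then a one-line change to `base_url` and `model`; everything else stays the same because the wire format is shared.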

• Free tier: deterministic summary, always available
• BYOK: narrative "what happened" section
• Pro: LLM summary on every session automatically
narrative summary
What happened
Investigated JWT token expiry causing 401s on refresh. Identified that the middleware was not renewing the token before the expiry window. Applied fix in tokens.py. 5/6 tests now passing.
Key decisions
• Extended expiry window from 60s to 300s
• Added redis dependency for token caching
• Deferred fixing test_auth_edge_case.py (complexity)