
Troubleshooting

| Status | Error | Cause | Fix |
|--------|-------|-------|-----|
| 400 | Invalid session ID format | Session ID doesn't match `ses_[hex]{8,40}` | Check ID format — must be `ses_` + 8-40 lowercase hex chars |
| 400 | Payload too large | Sync payload exceeds server limit | Compact the session first, or check `MAX_UPLOAD_SIZE` |
| 401 | Unauthorized | Missing or expired auth token | Run `sfs auth login` |
| 403 | Email not verified | Account needs verification | Check email for the verification link, or set `SFS_REQUIRE_EMAIL_VERIFICATION=false` |
| 409 | ETag mismatch | Concurrent modification | Pull latest with `sfs pull`, then push again |
| 413 | Request entity too large | Upload exceeds server limit | Check the `MAX_UPLOAD_SIZE` setting |
| 429 | Rate limited | Too many requests per minute | Wait, or increase `SFS_RATE_LIMIT_PER_MINUTE`; set to `0` to disable |

SessionFS generates session IDs in the format: ses_ followed by 8-40 lowercase hex characters.

Examples:

  • ses_ae7652a4 (8 chars — short form)
  • ses_346b4d7288214b0f (16 chars — standard)
  • ses_a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2 (40 chars — long form)

All formats are valid for sync, push, pull, and handoff operations.
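For scripting, the ID rule can be checked with a small regular expression (a sketch based on the format described above, not SessionFS's own validator):

```python
import re

# Matches ses_ followed by 8-40 lowercase hex characters, per the format above
SESSION_ID_RE = re.compile(r"^ses_[0-9a-f]{8,40}$")

def is_valid_session_id(session_id: str) -> bool:
    """Return True if the string is a well-formed SessionFS session ID."""
    return SESSION_ID_RE.fullmatch(session_id) is not None
```

Note that uppercase hex is rejected: session IDs are lowercase only.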

To debug a failing sync or push, work through the following checks:

1. Check daemon status:

   ```sh
   sfs daemon status
   sfs daemon logs
   ```

2. Try a manual push (shows detailed errors):

   ```sh
   sfs push ses_abc12345
   ```

3. Check server logs:

   ```sh
   kubectl logs deploy/sessionfs-api -n sessionfs --tail=100
   ```

4. Verify authentication:

   ```sh
   sfs auth status
   ```

5. Check local storage:

   ```sh
   sfs storage
   ```

Check that the MCP service port matches the container port (should be 8080):

```sh
kubectl describe deploy sessionfs-mcp -n sessionfs
```

Verify liveness/readiness probes target the correct port.
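Probe settings consistent with that port might look like the following sketch (the HTTP probe path is an assumption; check the values your chart actually sets):

```yaml
# Sketch: container port and probes must agree (probe path is an assumption)
ports:
  - containerPort: 8080
livenessProbe:
  httpGet:
    path: /health   # assumed path; use whatever the image actually serves
    port: 8080      # must match containerPort
readinessProbe:
  httpGet:
    path: /health
    port: 8080
```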

Dashboard returns 405 or shows malformed URLs


The dashboard must proxy API requests through nginx, not call an external API URL directly. In the Helm chart, the dashboard nginx ConfigMap handles /api/ proxying automatically. If you see URLs like https://your-domain/https://api.sessionfs.dev, the dashboard image was built with a hardcoded API URL. Rebuild with:

```sh
docker build --build-arg VITE_API_URL=/api -t sessionfs-dashboard .
```

S3 ParamValidationError: Invalid bucket name


S3 bucket names cannot contain /. If you need a key prefix:

```yaml
storage:
  s3:
    bucket: "my-bucket"   # Bucket name only, no slashes
    prefix: "sessionfs/"  # Optional key prefix
```

The code also handles bucket: "my-bucket/prefix" gracefully by splitting on the first /.
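That graceful handling can be sketched as follows (`split_bucket_config` is a hypothetical helper, not the actual SessionFS code):

```python
def split_bucket_config(value: str) -> tuple:
    """Split a configured bucket like 'my-bucket/prefix' into
    (bucket, key_prefix), since S3 bucket names cannot contain '/'.
    A value with no '/' yields an empty prefix."""
    bucket, _, prefix = value.partition("/")
    return bucket, prefix
```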

Do not add ?sslmode=require to the database URL. SessionFS handles SSL parameter translation internally. For RDS and Cloud SQL, asyncpg negotiates SSL automatically for non-localhost connections.

```yaml
# Correct — no sslmode
externalDatabase:
  host: mydb.cluster-abc123.us-east-1.rds.amazonaws.com
  existingSecret: sessionfs-db

# Wrong — do not add sslMode
externalDatabase:
  host: mydb.cluster-abc123.us-east-1.rds.amazonaws.com
  sslMode: require # This will cause errors
```

Rate limiting is per API key, not per IP. Check your configured limit:

```yaml
api:
  rateLimitPerMinute: 120 # Default: 120 requests/min per API key
  # Set to 0 to disable rate limiting entirely
```

Or via environment variable:

```sh
SFS_RATE_LIMIT_PER_MINUTE=0      # Disable
SFS_RATE_LIMIT_PER_MINUTE=10000  # Effectively unlimited
```

Changes require a pod restart to take effect.
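As a rough model, per-key limiting over a one-minute window can be sketched like this (illustrative only, not the server's actual implementation):

```python
import time
from collections import defaultdict, deque
from typing import Optional

class PerKeyRateLimiter:
    """Sliding one-minute window per API key. A limit of 0 disables
    rate limiting, matching the setting described above."""

    def __init__(self, limit_per_minute: int):
        self.limit = limit_per_minute
        self.hits = defaultdict(deque)  # api_key -> timestamps of recent requests

    def allow(self, api_key: str, now: Optional[float] = None) -> bool:
        if self.limit == 0:
            return True  # 0 disables rate limiting entirely
        now = time.monotonic() if now is None else now
        window = self.hits[api_key]
        # Drop hits older than one minute
        while window and now - window[0] >= 60.0:
            window.popleft()
        if len(window) >= self.limit:
            return False  # would exceed the per-key limit
        window.append(now)
        return True
```

Because the window is keyed by API key, one noisy client cannot exhaust another client's budget, which matches the per-key (not per-IP) behavior described above.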

Cursor sessions show 0 tool calls / audit returns 0 claims


Cursor sessions captured before v0.9.4 may be missing tool calls. The Cursor converter now reads the agentKv:blob: layer which contains full tool call data. To fix existing sessions:

  1. Restart the daemon: sfs daemon restart
  2. The daemon re-scans and re-captures Cursor sessions with the updated converter
  3. Re-push affected sessions: sfs sync

If the audit still returns 0 claims, the session may genuinely have no tool calls (e.g., a question-answer chat without code operations).

If sfs resume --in codex fails with “Failed to resume session”:

  1. Ensure you have Codex CLI installed: codex --version
  2. The resume command is codex resume <uuid> (a subcommand, not a flag)
  3. SessionFS auto-launches Codex after conversion

If Codex shows the session picker but then errors, the rollout file format may be incompatible with your Codex version. Check ~/.codex/sessions/ for the generated .jsonl file.

The handoff claim now copies session data to the recipient’s account. If the recipient gets “Session not found”:

  1. Ensure the handoff was claimed: sfs pull-handoff hnd_xxx
  2. The correct command is sfs pull-handoff (not sfs pull --handoff)
  3. Check that the claim created a copy: the recipient should see the session in sfs list-remote

When resuming a handoff session, the sender’s working directory may not exist on the receiver’s machine. SessionFS automatically falls back to the current working directory. Use --project to specify a different path:

```sh
sfs resume ses_abc --in claude-code --project ~/my-project
```
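The fallback described above amounts to roughly this (a sketch, not SessionFS's actual code; `resolve_project_dir` is a hypothetical helper):

```python
import os
from pathlib import Path

def resolve_project_dir(sender_cwd: str, override: str = "") -> str:
    """Pick the working directory for a resumed handoff session:
    an explicit --project override wins; otherwise the sender's cwd
    if it exists locally; otherwise fall back to the current directory."""
    if override:
        return str(Path(override).expanduser())
    if os.path.isdir(sender_cwd):
        return sender_cwd
    return os.getcwd()
```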

The daemon and CLI share a SQLite index. If you see “database is locked”, it usually resolves within 5 seconds (busy_timeout is set to 5000ms). If persistent:

  1. Check if multiple daemon instances are running: ps aux | grep sfsd
  2. Stop all daemons: sfs daemon stop
  3. Restart: sfs daemon start
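The 5000 ms busy timeout behaves like the following plain `sqlite3` setup (a sketch; an in-memory database stands in for the daemon's on-disk index):

```python
import sqlite3

# A connection that waits up to 5 s for a competing writer to release
# its lock, mirroring the busy_timeout described above
conn = sqlite3.connect(":memory:", timeout=5.0)
conn.execute("PRAGMA busy_timeout = 5000")  # milliseconds
timeout_ms = conn.execute("PRAGMA busy_timeout").fetchone()[0]
```

With this set, a writer holding the lock for under five seconds causes a wait rather than an immediate "database is locked" error.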
If sessions aren't syncing automatically:

  1. Check the sync mode: sfs sync status
  2. Ensure you're authenticated: sfs auth status
  3. In selective mode, sessions must be watched: sfs sync watch ses_abc
  4. The daemon polls for settings changes every 60 seconds — wait a minute after changing the mode in the dashboard
  5. Check daemon logs: sfs daemon logs

Deleted session keeps reappearing after sync


This was a known bug before v0.9.9. The old behavior: deleting a session from the dashboard triggered a server soft-delete, but autosync would re-push the local copy and silently un-delete it.

Fix (v0.9.9+): deletes are now sync-aware. When you delete a session, its ID is added to ~/.sessionfs/deleted.json, and autosync skips it in both directions. Upgrading to v0.9.9 or later resolves the issue.
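The tombstone check can be sketched as follows (hypothetical helpers; the on-disk format of deleted.json is assumed here to be a JSON array of session IDs):

```python
import json
from pathlib import Path

def load_deleted_ids(path: Path) -> set:
    """Read the tombstone file (assumed to be a JSON array of session
    IDs; the actual on-disk format may differ)."""
    if not path.exists():
        return set()
    return set(json.loads(path.read_text()))

def sessions_to_sync(candidates: list, deleted: set) -> list:
    # Autosync skips tombstoned sessions in both directions
    return [sid for sid in candidates if sid not in deleted]
```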

Soft-deleted sessions are retained for 30 days. After that, an admin can purge them via POST /api/v1/admin/purge-deleted. This hard-deletes all expired sessions (blob + database row). Requires admin role.

Deleted session still counts against storage


Soft-deleted sessions are excluded from storage quota calculations. If you see stale numbers after deleting sessions, the API may be serving a cached response. Wait a few minutes and check again, or run sfs sync status to get a fresh count.