Self-Hosted Deployment

Deploy SessionFS on your own Kubernetes cluster.

SessionFS can be deployed to any Kubernetes cluster using the official Helm chart. The deployment includes:

  • API Server — FastAPI application handling session CRUD, sync, and authentication
  • MCP Server — Model Context Protocol bridge (optional)
  • Web Dashboard — React management interface (optional)
  • PostgreSQL — Built-in or external database
  • Blob Storage — Local PVC, Amazon S3, or Google Cloud Storage
Prerequisites:

  • Kubernetes 1.26 or later
  • Helm 3.12 or later
  • kubectl configured for your cluster
  • A PersistentVolume provisioner (most managed clusters include one)

SessionFS ships with a hardened default security posture — no action required on your part. The Helm chart and container images conform to the CIS Kubernetes Benchmark and the restricted Pod Security Standards profile.

Both sessionfs-api and sessionfs-mcp Docker images:

  • Run as non-root user (UID 10001, dedicated sessionfs system user)
  • Ship with no shell or package manager for the runtime user
  • Are scanned on every release via trivy (CRITICAL/HIGH findings block the pipeline)

Every pod declared by the chart — API, MCP, dashboard, PostgreSQL, and the helm test hook — runs with:

Setting                       Value
runAsNonRoot                  true
runAsUser                     10001 (or 999 for PostgreSQL, matching upstream convention)
readOnlyRootFilesystem        true
allowPrivilegeEscalation      false
capabilities.drop             [ALL]
seccompProfile.type           RuntimeDefault
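
In a rendered manifest, these settings appear as a container-level securityContext along these lines (a sketch; the container name is illustrative):

```yaml
containers:
  - name: api
    securityContext:
      runAsNonRoot: true
      runAsUser: 10001
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
      seccompProfile:
        type: RuntimeDefault
```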

The PostgreSQL container mounts emptyDir volumes at /tmp and /var/run/postgresql so it can write its socket directory and temp files even with a read-only root filesystem. The persistent data volume (/var/lib/postgresql/data) uses a standard PVC.
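
In the StatefulSet spec this typically looks like the following (a sketch; volume names are illustrative, and the data volume is backed by the PVC):

```yaml
volumes:
  - name: tmp
    emptyDir: {}
  - name: pg-run
    emptyDir: {}
containers:
  - name: postgresql
    volumeMounts:
      - name: tmp
        mountPath: /tmp
      - name: pg-run
        mountPath: /var/run/postgresql
      - name: data                          # PVC-backed persistent volume
        mountPath: /var/lib/postgresql/data
```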

The chart does not ship NetworkPolicies by default, so it remains installable on clusters whose CNI does not support them. To apply restrictive NetworkPolicies:

values.yaml
networkPolicy:
  enabled: true # Coming in a future chart release

For now, enforce network isolation at the namespace level via your CNI (Cilium, Calico, or similar).
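
Until the chart grows its own support, a minimal hand-applied policy that restricts ingress to traffic from pods in the same namespace might look like this (a sketch, not shipped with the chart):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: sessionfs-same-namespace
  namespace: sessionfs
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # allow traffic only from pods in this namespace
```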

All secrets (database credentials, verification secret, encryption key, SMTP credentials, Resend API key) live in Kubernetes Secret objects, never in ConfigMaps. See Secrets Management below.

You can verify the security posture of a rendered chart with trivy:

Terminal window
helm template sessionfs sessionfs/sessionfs \
  --namespace sessionfs \
  > /tmp/rendered.yaml
trivy config /tmp/rendered.yaml --severity CRITICAL,HIGH

A clean scan should report zero CRITICAL or HIGH misconfigurations.

Add the Helm repository:

Terminal window
helm repo add sessionfs https://charts.sessionfs.dev
helm repo update

Create a namespace for the deployment:

Terminal window
kubectl create namespace sessionfs

Single replica, built-in PostgreSQL, no ingress:

Terminal window
helm install sessionfs sessionfs/sessionfs \
  -f values.minimal.yaml \
  --namespace sessionfs

Access via port-forward:

Terminal window
kubectl port-forward svc/sessionfs-api 8000:8000 -n sessionfs

Two API replicas, built-in PostgreSQL, ingress enabled:

Terminal window
helm install sessionfs sessionfs/sessionfs \
  --namespace sessionfs \
  --set ingress.enabled=true \
  --set ingress.hosts[0].host=sessionfs.yourdomain.com

External database, cloud storage, autoscaling, network policies:

Terminal window
helm install sessionfs sessionfs/sessionfs \
  -f values.production.yaml \
  --namespace sessionfs \
  --set postgresql.enabled=false \
  --set externalDatabase.existingSecret=sessionfs-db \
  --set security.existingSecret=sessionfs-secrets \
  --set storage.type=s3 \
  --set storage.s3.bucket=my-sessionfs-bucket \
  --set ingress.enabled=true \
  --set ingress.className=nginx \
  --set ingress.hosts[0].host=sessionfs.yourdomain.com \
  --set ingress.hosts[0].paths.api=/api \
  --set ingress.hosts[0].paths.mcp=/mcp \
  --set ingress.hosts[0].paths.dashboard=/

SessionFS requires several secrets for operation. You can either provide them inline in values.yaml (not recommended for production) or reference pre-existing Kubernetes secrets.

Terminal window
# Application secrets
kubectl create secret generic sessionfs-secrets \
  --namespace sessionfs \
  --from-literal=verification-secret="$(openssl rand -hex 32)" \
  --from-literal=encryption-key="$(openssl rand -hex 32)" \
  --from-literal=resend-api-key="re_your_key_here"

# External database (if not using built-in PostgreSQL)
kubectl create secret generic sessionfs-db \
  --namespace sessionfs \
  --from-literal=database-url="postgresql+asyncpg://user:pass@host:5432/sessionfs"
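
Both verification-secret and encryption-key above are 32 random bytes; openssl rand -hex 32 emits them as a 64-character hex string, which you can sanity-check before creating the secret:

```shell
# 32 random bytes, hex-encoded -> 64 characters
key="$(openssl rand -hex 32)"
echo "${#key}"  # prints 64
```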

Then reference them in your Helm values:

security:
  existingSecret: sessionfs-secrets

externalDatabase:
  existingSecret: sessionfs-db

Local storage uses a PersistentVolumeClaim. Suitable for single-node clusters or evaluation.

storage:
  type: local
  local:
    persistence:
      enabled: true
      size: 10Gi

For Amazon S3:

storage:
  type: s3
  s3:
    bucket: my-sessionfs-bucket # Bucket name only — no slashes
    region: us-east-1
    prefix: "" # Optional key prefix (e.g. "sessionfs/")

Using IRSA (IAM Roles for Service Accounts):

If your EKS cluster is set up for IRSA, no additional credentials are needed. Attach this IAM policy to the IAM role associated with the service account:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-sessionfs-bucket",
        "arn:aws:s3:::my-sessionfs-bucket/*"
      ]
    }
  ]
}

Annotate the service account in your Helm values:

serviceAccount:
  create: true
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/sessionfs-s3-role

Using static credentials:

Terminal window
kubectl create secret generic aws-creds \
  --namespace sessionfs \
  --from-literal=aws-access-key-id=AKIA... \
  --from-literal=aws-secret-access-key=...

Then reference the secret:

storage:
  s3:
    existingSecret: aws-creds

For Google Cloud Storage:

storage:
  type: gcs
  gcs:
    bucket: my-sessionfs-bucket

If using Workload Identity, no additional credentials are needed. Otherwise:

Terminal window
kubectl create secret generic gcs-creds \
  --namespace sessionfs \
  --from-file=gcs-credentials-json=./sa-key.json

storage:
  gcs:
    existingSecret: gcs-creds

The chart deploys a single-replica PostgreSQL StatefulSet. Suitable for small deployments.

postgresql:
  enabled: true
  auth:
    username: sessionfs
    database: sessionfs
  persistence:
    size: 10Gi

For production, use a managed PostgreSQL service (AWS RDS, GCP Cloud SQL, Azure Database for PostgreSQL).

postgresql:
  enabled: false
externalDatabase:
  host: your-db-host.region.rds.amazonaws.com
  port: 5432
  username: sessionfs
  database: sessionfs
  existingSecret: sessionfs-db

Important: Do NOT add sslMode or ?sslmode=require to the connection URL. SessionFS handles SSL negotiation automatically via asyncpg. For RDS and Cloud SQL, SSL is negotiated transparently for non-localhost connections.
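
Concretely (placeholder credentials and host):

```shell
# Wrong — libpq-style sslmode parameters conflict with asyncpg:
# DATABASE_URL="postgresql+asyncpg://user:pass@host:5432/sessionfs?sslmode=require"

# Right — plain URL; SessionFS negotiates SSL itself for non-localhost hosts
DATABASE_URL="postgresql+asyncpg://user:pass@host:5432/sessionfs"
echo "$DATABASE_URL"
```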

Configure TLS through your ingress controller. Example with cert-manager:

ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: sessionfs.yourdomain.com
      paths:
        api: /api
        mcp: /mcp
        dashboard: /
  tls:
    - secretName: sessionfs-tls
      hosts:
        - sessionfs.yourdomain.com
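
The annotation above references a letsencrypt-prod ClusterIssuer; if your cluster doesn't already have one, a typical cert-manager issuer looks like this (a sketch — the email and secret name are placeholders):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@yourdomain.com            # placeholder — ACME account contact
    privateKeySecretRef:
      name: letsencrypt-prod-account-key # placeholder — ACME account key secret
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx
```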

Enable Prometheus ServiceMonitor (requires prometheus-operator):

monitoring:
  serviceMonitor:
    enabled: true
    interval: 30s
    labels:
      release: prometheus

To upgrade to the latest chart version:

Terminal window
helm repo update
helm upgrade sessionfs sessionfs/sessionfs \
  --namespace sessionfs \
  --reuse-values

Database migrations run automatically as Helm post-install/post-upgrade hooks.

Check pod status:

Terminal window
kubectl get pods -n sessionfs
kubectl describe pod <pod-name> -n sessionfs

View API logs:

Terminal window
kubectl logs -n sessionfs -l app.kubernetes.io/component=api --tail=100

Check the migration job:

Terminal window
kubectl get jobs -n sessionfs -l app.kubernetes.io/component=migration
kubectl logs -n sessionfs job/sessionfs-migrate-<revision>

Run the chart's built-in tests:

Terminal window
helm test sessionfs --namespace sessionfs

Pods stuck in Pending: Check that your cluster has a PersistentVolume provisioner and sufficient resources.

Database connection errors: Verify the database URL and credentials. For external databases, ensure network connectivity (security groups, VPC peering).

Ingress not working: Confirm your ingress controller is installed and the ingress class name matches.

Security context errors (runAsNonRoot): SessionFS images run as a non-root user (UID 10001) by default. If you override the pod security context in values.yaml, keep runAsUser pointing at a non-root UID:

api:
  podSecurityContext:
    runAsNonRoot: true
    runAsUser: 10001

asyncpg SSL errors: Do not add ?sslmode=require to the database URL — SessionFS handles SSL parameter translation internally. For RDS/Cloud SQL, asyncpg negotiates SSL automatically for non-localhost connections.

After deployment, the API is documented via OpenAPI:

  • Swagger UI: https://your-domain/api/docs
  • ReDoc: https://your-domain/api/redoc
  • OpenAPI JSON: https://your-domain/api/openapi.json

Key endpoints:

Endpoint                              Description
POST /api/v1/auth/signup              Create account
GET /api/v1/auth/me                   Current user profile
GET /api/v1/sessions                  List sessions
POST /api/v1/sessions/{id}/audit      Run LLM Judge
GET /health                           Health check

SessionFS supports three email providers: Resend (SaaS), SMTP (enterprise), or none (air-gapped).

Resend:

Terminal window
helm install sessionfs sessionfs/sessionfs \
  --set email.provider=resend \
  --set email.resend.apiKey=$RESEND_KEY
SMTP:

Terminal window
helm install sessionfs sessionfs/sessionfs \
  --set email.provider=smtp \
  --set email.smtp.host=smtp.company.internal \
  --set email.smtp.port=587 \
  --set email.smtp.username=sessionfs \
  --set email.smtp.password=$SMTP_PASS \
  --set email.fromAddress=sessionfs@company.com

For implicit SSL (port 465):

Terminal window
--set email.smtp.port=465 \
--set email.smtp.ssl=true \
--set email.smtp.tls=false
No email (air-gapped):

Terminal window
helm install sessionfs sessionfs/sessionfs \
  --set email.provider=none \
  --set api.env.SFS_REQUIRE_EMAIL_VERIFICATION=false

Users will be auto-verified on signup. Email notifications (handoff, retention) will be logged but not sent.

To keep the SMTP password off the command line, reference a pre-existing secret:

Terminal window
kubectl create secret generic smtp-creds \
  --namespace sessionfs \
  --from-literal=username=sessionfs \
  --from-literal=password=$SMTP_PASS

helm install sessionfs sessionfs/sessionfs \
  --set email.provider=smtp \
  --set email.smtp.host=smtp.company.internal \
  --set email.smtp.existingSecret=smtp-creds

Migrations run automatically on helm install and helm upgrade via a post-install/post-upgrade hook. The migration job runs before the API pods start (hook weight -5).

To run migrations manually:

Terminal window
kubectl exec -it deploy/sessionfs-api -- alembic upgrade head

If the migration job fails, check logs:

Terminal window
kubectl logs job/sessionfs-migrate-<revision> -n sessionfs

The API will fail to start if tables don’t exist. Ensure migrations complete first. The Helm hook handles ordering automatically.

Rate limiting is per API key, defaulting to 120 requests per minute.

api:
  rateLimitPerMinute: 120 # Requests per minute per API key
  # Set to 0 to disable rate limiting (recommended for internal deployments)

Or via environment variable:

api:
  env:
    SFS_RATE_LIMIT_PER_MINUTE: "0" # Disable for internal deployments

SessionFS uses a reverse proxy architecture. All external traffic routes through the dashboard nginx, which proxies /api/ and /mcp/ to internal ClusterIP services:

Internet -> Ingress -> Dashboard Nginx -> API Service (ClusterIP)
                                       -> MCP Service (ClusterIP)
                                       -> Static Files
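
Inside the dashboard container, this presumably corresponds to nginx location blocks along these lines (a sketch; the upstream service names and ports are assumptions, not taken from the chart):

```nginx
location /api/ {
    # Forward the full /api/... path to the API ClusterIP service
    proxy_pass http://sessionfs-api:8000;
}
location /mcp/ {
    proxy_pass http://sessionfs-mcp:8080;
}
location / {
    root /usr/share/nginx/html;
    try_files $uri /index.html;  # SPA fallback for the React dashboard
}
```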

The ingress template routes everything to the dashboard service. Do NOT create separate ingress rules for /api or /mcp — this causes ALB target groups to stay in “unused” state.

The dashboard nginx has client_max_body_size configured (default 100MB). Override in values.yaml:

dashboard:
  clientMaxBodySize: "200m"

SessionFS supports GitLab merge request comments (cloud and self-hosted instances).

  1. Create a GitLab personal or project access token with api scope
  2. Add a webhook to your GitLab project:
    • URL: https://your-sessionfs-domain/webhooks/gitlab
    • Secret token: set in your Helm values or SFS_GITLAB_WEBHOOK_SECRET env var
    • Events: “Merge request events”
  3. Configure in the dashboard Settings page or via Helm:
api:
  env:
    SFS_GITLAB_WEBHOOK_SECRET: "your-webhook-secret"

GitLab MR comments include the same AI session context as GitHub PR comments, including audit findings when available.

See Environment Variables Reference for the complete list of SFS_* configuration options.

See Troubleshooting Guide for common errors, sync debugging, and Kubernetes-specific issues.