# Seline
Seline is an AI assistant that blends chat, visual tools, and a local knowledge base into a single desktop app. It runs mostly on your machine: your documents stay private, your conversations persist across sessions, and you can switch between LLM providers without leaving the app.
## Highlights
- Chat with configurable agents and keep long-running sessions organized.
- Enhance prompts with grounded context from your synced folders and memories.
- Generate and edit images, then assemble them into videos.
- Run vector search locally with LanceDB for fast, private retrieval.
- Run commands in your synced/indexed folders.
## Updates

- Fix: Prompt caching now works correctly with AI SDK v6 (`providerOptions` replaces the deprecated `experimental_providerOptions`). Cache creation and read metrics are properly reported in the observability dashboard.
- Prompt caching is enabled by default for supported providers:
  - Anthropic (direct): explicit cache breakpoints with configurable TTL (5m default, 1h premium)
  - OpenRouter: passes cache breakpoints to Anthropic/Gemini models; OpenAI, Grok, Moonshot, Groq, and DeepSeek use provider-side automatic caching (no TTL config)
  - Kimi: provider-side automatic context caching (no TTL config)
  - Antigravity / Codex: no prompt caching support
- Third provider added: now supports Antigravity models and the Google Antigravity subscription
- New: Moonshot Kimi K2.5 provider with 256K context, native vision, and thinking mode
## MCP Dynamic Configuration
Seline supports dynamic variables in MCP server configurations:
- `${SYNCED_FOLDER}`: Resolves to the path of the primary synced folder for the current character.
- `${SYNCED_FOLDERS}`: Resolves to a comma-separated list of all synced folders.
- `${SYNCED_FOLDERS_ARRAY}`: Resolves to multiple arguments, one for each synced folder (useful for servers like `filesystem`).
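As a sketch, a filesystem MCP server entry might use `${SYNCED_FOLDERS_ARRAY}` so every synced folder is passed as a separate argument. The server name, command, and surrounding config shape here are illustrative assumptions, not taken from Seline's docs:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "${SYNCED_FOLDERS_ARRAY}"]
    }
  }
}
```

With two synced folders, the `${SYNCED_FOLDERS_ARRAY}` placeholder would expand into two separate `args` entries rather than one comma-joined string.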
## Supported Platforms
- Windows (installer builds are available).
- macOS is supported today; DMG distribution is coming soon. You can build macOS packages from source in the meantime.
- Linux is untested.
## Prerequisites
For end users: none beyond the OS installer.
For developers:
- Node.js 20+ (22 recommended for Electron 39 native module rebuilds)
- npm 9+
- Windows 10/11 or macOS 12+
## Installation

```bash
npm install
```

## Development Workflow

```bash
npm run electron:pack && npm run electron:dev
```

This runs the Next.js dev server and launches Electron against http://localhost:3000.
## Build Commands

```bash
# Windows installer + portable
npm run electron:dist:win

# macOS (DMG/dir)
npm run electron:dist:mac
```

For local packaging without creating installers, use `npm run electron:pack`. See `docs/BUILD.md` for the full pipeline.
## Manual Model Placement
If you prefer to download models manually (or have slow/no internet during Docker build), place them in the paths below. Models are mounted via Docker volumes at runtime.
### Z-Image Turbo FP8

Base path: `comfyui_backend/ComfyUI/models/`

| Model | Path | Download |
|---|---|---|
| Checkpoint | `checkpoints/z-image-turbo-fp8-aio.safetensors` | HuggingFace |
| LoRA | `loras/z-image-detailer.safetensors` | HuggingFace |
### FLUX.2 Klein 4B

Base path: `comfyui_backend/flux2-klein-4b/volumes/models/`

| Model | Path | Download |
|---|---|---|
| VAE | `vae/flux2-vae.safetensors` | HuggingFace |
| CLIP | `clip/qwen_3_4b.safetensors` | HuggingFace |
| Diffusion Model | `diffusion_models/flux-2-klein-base-4b-fp8.safetensors` | HuggingFace |
### FLUX.2 Klein 9B

Base path: `comfyui_backend/flux2-klein-9b/volumes/models/`

| Model | Path | Download |
|---|---|---|
| VAE | `vae/flux2-vae.safetensors` | HuggingFace |
| CLIP | `clip/qwen_3_8b_fp8mixed.safetensors` | HuggingFace |
| Diffusion Model | `diffusion_models/flux-2-klein-base-9b-fp8.safetensors` | HuggingFace |
### Example Directory Structure

```
comfyui_backend/
├── ComfyUI/models/                      # Z-Image models
│   ├── checkpoints/
│   │   └── z-image-turbo-fp8-aio.safetensors
│   └── loras/
│       └── z-image-detailer.safetensors
│
├── flux2-klein-4b/volumes/models/       # FLUX.2 Klein 4B models
│   ├── vae/
│   │   └── flux2-vae.safetensors
│   ├── clip/
│   │   └── qwen_3_4b.safetensors
│   └── diffusion_models/
│       └── flux-2-klein-base-4b-fp8.safetensors
│
└── flux2-klein-9b/volumes/models/       # FLUX.2 Klein 9B models
    ├── vae/
    │   └── flux2-vae.safetensors
    ├── clip/
    │   └── qwen_3_8b_fp8mixed.safetensors
    └── diffusion_models/
        └── flux-2-klein-base-9b-fp8.safetensors
```
Note: The VAE (`flux2-vae.safetensors`) is the same for both Klein 4B and 9B. You can download it once and copy it to both locations.
## Swapping LoRAs (Z-Image)

The Z-Image Turbo FP8 workflow uses a LoRA for detail enhancement. You can swap it with any compatible LoRA.
### Step 1: Add Your LoRA File

Place your LoRA file in:

```
comfyui_backend/ComfyUI/models/loras/your-lora-name.safetensors
```
### Step 2: Update the Workflow

Edit `comfyui_backend/workflow_to_replace_z_image_fp8.json` and find node 41 (`LoraLoader`):

```json
"41": {
  "inputs": {
    "lora_name": "z-image-detailer.safetensors", // ← Change this
    "strength_model": 0.5,
    "strength_clip": 1,
    ...
  },
  "class_type": "LoraLoader"
}
```

Change `lora_name` to your LoRA filename.
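If you swap LoRAs often, the edit can also be scripted. This is a minimal Node sketch assuming the node id (`"41"`) and field layout shown in the snippet above; the `swapLora` helper name is ours, not part of Seline:

```javascript
// Swap the LoRA referenced by the LoraLoader node in a ComfyUI workflow object.
// Assumes node "41" is the LoraLoader, as in the workflow snippet above.
function swapLora(workflow, newLora) {
  const node = workflow["41"];
  if (!node || node.class_type !== "LoraLoader") {
    throw new Error('node "41" is not a LoraLoader');
  }
  node.inputs.lora_name = newLora;
  return workflow;
}

// Usage sketch (read, modify, write back):
// const fs = require("fs");
// const path = "comfyui_backend/workflow_to_replace_z_image_fp8.json";
// const wf = JSON.parse(fs.readFileSync(path, "utf8"));
// fs.writeFileSync(path, JSON.stringify(swapLora(wf, "your-lora-name.safetensors"), null, 2));
```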
### Step 3: Restart the Container

The workflow JSON is mounted as a volume, so a restart is enough:

```bash
cd comfyui_backend
docker-compose restart comfyui workflow-api
```

## Troubleshooting
- Native module errors (`better-sqlite3`, `onnxruntime-node`): run `npm run electron:rebuild-native` before building.
- Black screen in packaged app: verify `.next/standalone` and `extraResources` are correct; see `docs/BUILD.md`.
- Missing provider keys: ensure `ANTHROPIC_API_KEY`, `OPENROUTER_API_KEY`, or `KIMI_API_KEY` is configured in settings or `.env`.
- Embeddings mismatch errors: reindex Vector Search from Settings or run `POST /api/vector-sync` with `action: "reindex-all"`.
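For the missing-key case, a minimal `.env` sketch looks like this. Only the variable names come from the troubleshooting list above; the values are placeholders you replace with your own keys:

```
ANTHROPIC_API_KEY=your-anthropic-key
OPENROUTER_API_KEY=your-openrouter-key
KIMI_API_KEY=your-kimi-key
```

You only need the key(s) for the provider(s) you actually use.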
## Documentation

- `docs/ARCHITECTURE.md` - system layout and core flows
- `docs/AI_PIPELINES.md` - LLM, embeddings, and tool pipelines
- `docs/DEVELOPMENT.md` - dev setup, scripts, tests, and build process
- `docs/API.md` - internal modules and API endpoints
