
Seline

Seline Demo

Seline is an AI assistant that blends chat, visual tools, and a local knowledge base into a single desktop app. It runs mostly on your machine: your documents stay private, your conversations persist across sessions, and you can switch between LLM providers without leaving the app.

Highlights

  • Chat with configurable agents and keep long-running sessions organized.
  • Enhance prompts with grounded context from your synced folders and memories.
  • Generate and edit images, then assemble them into videos.
  • Run vector search locally with LanceDB for fast, private retrieval.
  • Run commands in your synced and indexed folders.

Updates

  • Fix: Prompt caching now works correctly with AI SDK v6 (providerOptions replaces deprecated experimental_providerOptions). Cache creation and read metrics are properly reported in the observability dashboard.
  • Prompt caching enabled by default for supported providers:
    • Anthropic (direct) - explicit cache breakpoints with configurable TTL (5m default, 1h premium)
    • OpenRouter - passes cache breakpoints to Anthropic/Gemini models; OpenAI, Grok, Moonshot, Groq, and DeepSeek use provider-side automatic caching (no TTL config)
    • Kimi - provider-side automatic context caching (no TTL config)
    • Antigravity / Codex - no prompt caching support
  • Third provider added: Antigravity - now supports Antigravity models and the Google Antigravity subscription
  • New: Moonshot Kimi K2.5 provider with 256K context, native vision, and thinking mode

MCP Dynamic Configuration

Seline supports dynamic variables in MCP server configurations:

  • ${SYNCED_FOLDER}: Resolves to the path of the primary synced folder for the current character.
  • ${SYNCED_FOLDERS}: Resolves to a comma-separated list of all synced folders.
  • ${SYNCED_FOLDERS_ARRAY}: Resolves to multiple arguments, one for each synced folder (useful for servers like filesystem).
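
These variables let a single MCP entry track whatever folders a character has synced. As an illustration, a filesystem server entry might use ${SYNCED_FOLDERS_ARRAY} so each synced folder becomes its own argument (the mcpServers JSON shape and the @modelcontextprotocol/server-filesystem package follow common MCP conventions and are illustrative, not confirmed Seline specifics):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "${SYNCED_FOLDERS_ARRAY}"
      ]
    }
  }
}
```

At launch, ${SYNCED_FOLDERS_ARRAY} would expand into one args entry per synced folder, so the server is granted access to each of them.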

Supported Platforms

  • Windows: installer builds are available.
  • macOS: supported today; DMG distribution is coming soon. In the meantime, you can build macOS packages from source.
  • Linux: untested.

Prerequisites

For end users: none beyond the OS installer.

For developers:

  • Node.js 20+ (22 recommended for Electron 39 native module rebuilds)
  • npm 9+
  • Windows 10/11 or macOS 12+

Installation

npm install

Development Workflow

npm run electron:pack && npm run electron:dev

This runs the Next.js dev server and launches Electron against http://localhost:3000.

Build Commands

# Windows installer + portable
npm run electron:dist:win

# macOS (DMG/dir)
npm run electron:dist:mac

For local packaging without creating installers, use npm run electron:pack. See docs/BUILD.md for the full pipeline.

📦 Manual Model Placement

If you prefer to download models manually (or have slow/no internet during Docker build), place them in the paths below. Models are mounted via Docker volumes at runtime.

Z-Image Turbo FP8

Base path: comfyui_backend/ComfyUI/models/

Model        Path                                            Download
Checkpoint   checkpoints/z-image-turbo-fp8-aio.safetensors   HuggingFace
LoRA         loras/z-image-detailer.safetensors              HuggingFace

FLUX.2 Klein 4B

Base path: comfyui_backend/flux2-klein-4b/volumes/models/

Model            Path                                                   Download
VAE              vae/flux2-vae.safetensors                              HuggingFace
CLIP             clip/qwen_3_4b.safetensors                             HuggingFace
Diffusion Model  diffusion_models/flux-2-klein-base-4b-fp8.safetensors  HuggingFace

FLUX.2 Klein 9B

Base path: comfyui_backend/flux2-klein-9b/volumes/models/

Model            Path                                                   Download
VAE              vae/flux2-vae.safetensors                              HuggingFace
CLIP             clip/qwen_3_8b_fp8mixed.safetensors                    HuggingFace
Diffusion Model  diffusion_models/flux-2-klein-base-9b-fp8.safetensors  HuggingFace
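
Before dropping files in manually, you can create the full directory layout from the tables above in one pass (safe to re-run; mkdir -p is idempotent):

```shell
# Create every model directory expected by the tables above.
mkdir -p comfyui_backend/ComfyUI/models/checkpoints
mkdir -p comfyui_backend/ComfyUI/models/loras
mkdir -p comfyui_backend/flux2-klein-4b/volumes/models/vae
mkdir -p comfyui_backend/flux2-klein-4b/volumes/models/clip
mkdir -p comfyui_backend/flux2-klein-4b/volumes/models/diffusion_models
mkdir -p comfyui_backend/flux2-klein-9b/volumes/models/vae
mkdir -p comfyui_backend/flux2-klein-9b/volumes/models/clip
mkdir -p comfyui_backend/flux2-klein-9b/volumes/models/diffusion_models
```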

Example Directory Structure

comfyui_backend/
├── ComfyUI/models/                          # Z-Image models
│   ├── checkpoints/
│   │   └── z-image-turbo-fp8-aio.safetensors
│   └── loras/
│       └── z-image-detailer.safetensors
│
├── flux2-klein-4b/volumes/models/           # FLUX.2 Klein 4B models
│   ├── vae/
│   │   └── flux2-vae.safetensors
│   ├── clip/
│   │   └── qwen_3_4b.safetensors
│   └── diffusion_models/
│       └── flux-2-klein-base-4b-fp8.safetensors
│
└── flux2-klein-9b/volumes/models/           # FLUX.2 Klein 9B models
    ├── vae/
    │   └── flux2-vae.safetensors
    ├── clip/
    │   └── qwen_3_8b_fp8mixed.safetensors
    └── diffusion_models/
        └── flux-2-klein-base-9b-fp8.safetensors

Note: The VAE (flux2-vae.safetensors) is the same for both Klein 4B and 9B. You can download it once and copy it to both locations.
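
That copy can be scripted; a small sketch using the paths from the tables above (the copy is skipped if the VAE has not been downloaded yet):

```shell
# flux2-vae.safetensors is shared by Klein 4B and 9B: download once, copy once.
SRC=comfyui_backend/flux2-klein-4b/volumes/models/vae/flux2-vae.safetensors
DST=comfyui_backend/flux2-klein-9b/volumes/models/vae/flux2-vae.safetensors
mkdir -p "${SRC%/*}" "${DST%/*}"   # ensure both vae/ directories exist
if [ -f "$SRC" ] && [ ! -f "$DST" ]; then
  cp "$SRC" "$DST"                 # reuse the already-downloaded VAE for 9B
fi
```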

🔄 Swapping LoRAs (Z-Image)

The Z-Image Turbo FP8 workflow uses a LoRA for detail enhancement. You can swap it with any compatible LoRA.

Step 1: Add Your LoRA File

Place your LoRA file in:

comfyui_backend/ComfyUI/models/loras/your-lora-name.safetensors

Step 2: Update the Workflow

Edit comfyui_backend/workflow_to_replace_z_image_fp8.json and find node 41 (LoraLoader):

"41": {
  "inputs": {
    "lora_name": "z-image-detailer.safetensors",  // ← Change this
    "strength_model": 0.5,
    "strength_clip": 1,
    ...
  },
  "class_type": "LoraLoader"
}

Change lora_name to your LoRA filename.

Step 3: Restart the Container

The workflow JSON is mounted as a volume, so just restart:

cd comfyui_backend
docker-compose restart comfyui workflow-api

Troubleshooting

  • Native module errors (better-sqlite3, onnxruntime-node): run npm run electron:rebuild-native before building.
  • Black screen in packaged app: verify .next/standalone and extraResources are correct; see docs/BUILD.md.
  • Missing provider keys: ensure ANTHROPIC_API_KEY, OPENROUTER_API_KEY, or KIMI_API_KEY is configured in settings or .env.
  • Embeddings mismatch errors: reindex Vector Search from Settings or run POST /api/vector-sync with action: "reindex-all".
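
For the provider-key bullet above, the keys can also live in a .env file; a minimal sketch with placeholder values (set whichever providers you actually use; the file location next to the project is an assumption):

```
# .env - placeholder values, not real keys
ANTHROPIC_API_KEY=your-anthropic-key
OPENROUTER_API_KEY=your-openrouter-key
KIMI_API_KEY=your-kimi-key
```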

Documentation

  • docs/ARCHITECTURE.md - system layout and core flows
  • docs/AI_PIPELINES.md - LLM, embeddings, and tool pipelines
  • docs/DEVELOPMENT.md - dev setup, scripts, tests, and build process
  • docs/API.md - internal modules and API endpoints
FellowTraveler/seline