arnab2001/Optischema-Slim
The Local-First Doctor for your PostgreSQL. Debug queries, verify indexes with HypoPG, and optimize performance using local LLMs. Zero data egress.
🎯 All-In-One Docker Image
OptiSchema Slim now ships as a single ~390MB Docker image containing both the UI and API.
No separate frontend container. No Node.js runtime. One pull, one run.
Why OptiSchema Slim?
- Privacy First: Your schema and queries never leave localhost.
- Simulation Engine: Verify index suggestions with HypoPG before touching production.
- Model Agnostic: Use Ollama (SQLCoder) locally, or bring your own OpenAI/Gemini/DeepSeek keys.
- All-In-One: Single container, single port (8080), built-in static UI serving.
🚀 Quickstart Guide
Option 1: The 10-Second Demo (Recommended)
Run an auto-configured environment with a "bad database" to see OptiSchema's analysis in action.
```bash
# Clone the repo
git clone https://github.com/arnab2001/Optischema-Slim.git
cd Optischema-Slim

# Start the demo
docker compose -f docker-compose.demo.yml up --build
```

- URL: http://localhost:8452
- Scenario: Pre-loaded with slow queries and missing indexes.
Option 2: Run with Your Database
Pull the official image and connect to your local Postgres.
```bash
# Pull the ~390MB image
docker pull arnab2001/optischema-slim:latest

# Run on port 8080
docker run -p 8080:8080 arnab2001/optischema-slim:latest
```

- URL: http://localhost:8080
- Setup: Enter your `postgres://` connection string in the UI.
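Before pasting a connection string, it can help to sanity-check the DSN shape. A minimal sketch in Python (the `check_dsn` helper is hypothetical, not part of OptiSchema):

```python
from urllib.parse import urlparse

def check_dsn(dsn: str) -> bool:
    """Rough sanity check for a postgres:// connection string:
    correct scheme, a host, and a database name in the path."""
    parts = urlparse(dsn)
    return (
        parts.scheme in ("postgres", "postgresql")
        and bool(parts.hostname)
        and bool(parts.path.lstrip("/"))  # database name present
    )

print(check_dsn("postgresql://user:pass@localhost:5432/mydb"))  # True
print(check_dsn("mysql://user@localhost/mydb"))                 # False
```

Note that when OptiSchema runs in Docker and Postgres runs on the host, `localhost` inside the container does not reach the host; on Docker Desktop, use `host.docker.internal` instead.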
🛠️ Development (Local Source Build)
```bash
# Backend (FastAPI)
make dev-backend

# Frontend (Next.js)
make dev-frontend
```

Features
- Real-time Monitoring: Heatmaps and latency tracking via `pg_stat_statements`.
- AI Analysis: Context-aware suggestions using your schema and table stats.
- Cost Verification: Compare `EXPLAIN` costs (Original vs. Virtual Index) side-by-side.
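The cost-verification step boils down to comparing the planner's estimates with and without the virtual index. A rough sketch of that comparison, assuming `EXPLAIN (FORMAT JSON)` output; the `cost_reduction` helper is illustrative, not the project's actual code:

```python
def cost_reduction(original_plan: dict, hypo_plan: dict) -> float:
    """Percent cost reduction between two EXPLAIN (FORMAT JSON) plans."""
    orig = original_plan["Plan"]["Total Cost"]
    hypo = hypo_plan["Plan"]["Total Cost"]
    return round(100.0 * (orig - hypo) / orig, 1)

# Example: a sequential scan vs. a hypothetical index scan.
seq = {"Plan": {"Node Type": "Seq Scan", "Total Cost": 2345.0}}
idx = {"Plan": {"Node Type": "Index Scan", "Total Cost": 42.5}}
print(cost_reduction(seq, idx))  # 98.2
```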
Architecture
The system follows a Collect → Analyze → Simulate pipeline designed for distinct safety and performance guarantees:
- Frontend: Vite + React with Tailwind UI & Recharts for real-time visualization.
- Backend: FastAPI paired with AsyncPG for high-concurrency, non-blocking I/O.
- Core Engine:
  - Metric Collection: Ingests `pg_stat_statements` to fingerprint and rank queries by Total Time and I/O.
  - Context Engine: Enriches queries with live schema definitions, indexes, and table statistics (tuple counts, bloat).
  - AI Analysis: A router sends sanitized context to the configured LLM (local or cloud) to synthesize optimization strategies.
  - HypoPG Simulation: Creates virtual indexes in a transient session to verify `EXPLAIN` cost reductions before suggesting them.
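The fingerprinting idea above can be sketched as follows. This hypothetical `fingerprint` helper only illustrates literal normalization and is not the project's implementation (in practice, `pg_stat_statements` already exposes a `queryid` that serves this purpose):

```python
import hashlib
import re

def fingerprint(sql: str) -> str:
    """Normalize literals and whitespace, then hash, so that
    structurally identical queries collapse to one fingerprint."""
    q = sql.lower().strip()
    q = re.sub(r"'[^']*'", "?", q)   # string literals -> placeholder
    q = re.sub(r"\b\d+\b", "?", q)   # numeric literals -> placeholder
    q = re.sub(r"\s+", " ", q)       # collapse whitespace
    return hashlib.sha1(q.encode()).hexdigest()[:12]

a = fingerprint("SELECT * FROM users WHERE id = 42")
b = fingerprint("select *  from users where id = 7")
print(a == b)  # True: same shape, different literals
```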
Configuration / LLM Setup
Environment Setup
1. Create a `.env` file from the example: `cp .env.example .env`

2. To use Ollama:
   - Install Ollama and pull the model: `ollama pull sqlcoder:7b`
   - Set `LLM_PROVIDER=ollama` in your `.env`.
   - Ensure OptiSchema can reach your host (typically `http://host.docker.internal:11434`).

3. To use Cloud Models:
   - Add your `OPENAI_API_KEY`, `GEMINI_API_KEY`, or `DEEPSEEK_API_KEY` to the `.env` file.
   - Set `LLM_PROVIDER` accordingly (e.g., `openai`, `gemini`).

4. Auto-Connection (Optional):
   - Set `DATABASE_URL=postgresql://user:pass@host:5432/db` to skip the connection screen on startup.
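The provider selection described above might look roughly like this. The `PROVIDERS` table and `resolve_provider` helper are illustrative assumptions, not the backend's actual routing code:

```python
# Hypothetical provider table for illustration only.
PROVIDERS = {
    "ollama":   {"base_url": "http://host.docker.internal:11434", "key_env": None},
    "openai":   {"base_url": "https://api.openai.com/v1", "key_env": "OPENAI_API_KEY"},
    "gemini":   {"base_url": "https://generativelanguage.googleapis.com", "key_env": "GEMINI_API_KEY"},
    "deepseek": {"base_url": "https://api.deepseek.com", "key_env": "DEEPSEEK_API_KEY"},
}

def resolve_provider(env: dict) -> dict:
    """Pick the LLM endpoint from LLM_PROVIDER, defaulting to local Ollama.

    Cloud providers require their API key to be present in the environment.
    """
    name = env.get("LLM_PROVIDER", "ollama").lower()
    cfg = PROVIDERS.get(name)
    if cfg is None:
        raise ValueError(f"Unknown LLM_PROVIDER: {name}")
    key = env.get(cfg["key_env"]) if cfg["key_env"] else None
    if cfg["key_env"] and not key:
        raise RuntimeError(f"{cfg['key_env']} is required for provider '{name}'")
    return {"provider": name, "base_url": cfg["base_url"], "api_key": key}

print(resolve_provider({"LLM_PROVIDER": "ollama"})["base_url"])
```

Defaulting to Ollama keeps the zero-egress promise: no data leaves localhost unless a cloud key is explicitly configured.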
Roadmap / Status
- ✅ Core Metrics
- ✅ HypoPG Integration
- 🚧 Health Scan (In Progress)
- 🚧 History Persistence
Status: We are currently optimizing the Docker image for the public release. Join the Waitlist to get the v1 image.
Contributing
PRs are welcome! Please check out `backend/services` to see how the different components are organized.
