# Obelisk Core

Open-source AI agent framework with a visual workflow editor, self-hosted inference, and one-click deployment.

Website · X (Twitter) · Telegram

Obelisk Core is an open-source framework for building, running, and deploying AI agents. Design workflows visually, connect to a self-hosted LLM, and deploy autonomous agents, all from your own hardware.

**Status:** Alpha (v0.2.0-alpha)
## How It Works

Obelisk Core uses several services that work together:

```
┌───────────────────────────────────┐
│  Visual Workflow Editor           │ ← Browser UI (Next.js)
│  Design agent workflows with      │   Build, test, and deploy
│  drag-and-drop nodes              │   workflows visually
└────────────────┬──────────────────┘
                 │ executes
┌────────────────▼──────────────────┐
│  TypeScript Execution Engine      │ ← Agent Runtime (Node.js)
│  Runs workflows as autonomous     │   Nodes: inference, Telegram,
│  agents in Docker containers      │   memory, scheduling, Clanker,
└────────────────┬──────────────────┘   Polymarket, etc.
                 │ calls
      ┌──────────┴────────┬─────────────────┬──────────────────┐
      ▼                   ▼                 ▼                  ▼
┌──────────────┐  ┌──────────────┐  ┌──────────────┐  ┌──────────────────┐
│  Inference   │  │  Blockchain  │  │  Polymarket  │  │  Deployment API  │
│  Service     │  │  Service     │  │  Service     │  │  (Agents)        │
│  (Python)    │  │  (Clanker)   │  │  (Orders,    │  │  Build, deploy,  │
│  Qwen3 local │  │  State, V4   │  │  Redeem,     │  │  manage agents   │
│  or Router   │  │  swaps)      │  │  Snapshot)   │  │                  │
└──────────────┘  └──────────────┘  └──────────────┘  └──────────────────┘
```
Services:

- **Inference Service** – Python FastAPI server with self-hosted Qwen3-0.6B, or use the Router Service (https://router.theobelisk.ai) for hosted LLMs (e.g. Mistral). In the Inference Config node, set `endpoint_url` to `https://router.theobelisk.ai` (the canonical default). If your router is behind a path-based proxy, or the service docs specify a `/v1` base path, use `https://router.theobelisk.ai/v1` instead. Set `agent_id` (e.g. `clawballs`) to identify the agent.
- **Blockchain Service** – Clanker state API, launch summary, V4 swaps (CabalSwapper); workflows read token/pool data and execute buys/sells.
- **Polymarket Service** – CLOB orders, redeem positions, market snapshot, probability model; used by Polymarket Sniper workflows.
- **Deployment Layer** – Deploy workflows as Docker agents from the UI; manage running agents at `/deployments`.
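As a concrete sketch, an Inference Config pointed at the Router might carry values like the following. This is hypothetical: the exact property names come from the editor's node, not this snippet; `temperature` and thinking mode are parameters the Inference Config node is documented to expose.

```json
{
  "endpoint_url": "https://router.theobelisk.ai",
  "agent_id": "clawballs",
  "temperature": 0.7,
  "thinking_mode": false
}
```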
The Deployment API (build, deploy, manage agents) is separate from the PM2-managed group: PM2 starts and stops only the core, inference, blockchain, and polymarket services. The Deployment API must be deployed and managed outside PM2. When self-hosting it, configure the service with the required settings (e.g. base URL, and authentication tokens if applicable) and run it on a standalone VM, in a Docker container, or on Kubernetes. The UI expects the deployment service at the URL configured in your environment (e.g. `api.theobelisk.ai` in production). See `docker/README.md` for agent container and deploy endpoint details.
The UI is a visual node editor (like ComfyUI). The Execution Engine is a TypeScript runtime that processes workflows node-by-node and runs agents in Docker containers.
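The node-by-node execution model can be sketched as follows. This is illustrative only; the type and function names are invented for this example and are not the engine's actual API.

```typescript
// Illustrative sketch of node-by-node workflow execution (invented names,
// not the repo's actual types). Each node declares the nodes it depends on;
// the runner executes any node whose inputs are ready until all are done.
type WorkflowNode = {
  id: string;
  deps: string[];
  run: (inputs: string[]) => string;
};

function runWorkflow(nodes: WorkflowNode[]): Map<string, string> {
  const results = new Map<string, string>();
  const pending = [...nodes];
  while (pending.length > 0) {
    // Find a node whose dependencies have all produced output.
    const i = pending.findIndex((n) => n.deps.every((d) => results.has(d)));
    if (i === -1) throw new Error("cycle or missing dependency");
    const node = pending.splice(i, 1)[0];
    results.set(node.id, node.run(node.deps.map((d) => results.get(d)!)));
  }
  return results;
}

// Example: a Text node feeding an Inference-like node.
const out = runWorkflow([
  { id: "text", deps: [], run: () => "hello agent" },
  { id: "shout", deps: ["text"], run: (inp) => inp[0].toUpperCase() },
]);
console.log(out.get("shout")); // "HELLO AGENT"
```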
## Features

- **Visual Workflow Editor** – Drag-and-drop node-based editor to design agent logic
- **Self-Hosted LLM** – Qwen3-0.6B with thinking mode, no external API required; or use the Router Service (https://router.theobelisk.ai) to hook up Mistral or other hosted LLMs via Inference Config (`endpoint_url`: `https://router.theobelisk.ai`, `agent_id`: e.g. `clawballs`)
- **Autonomous Agents** – Deploy workflows as long-running Docker containers
- **Telegram Integration** – Listener and sender nodes for building Telegram bots
- **Conversation Memory** – Persistent memory with automatic summarization
- **Binary Intent** – Yes/no decision nodes for conditional workflow logic
- **Wallet Authentication** – Privy-based wallet connect for managing deployed agents
- **Clanker / Blockchain** – Blockchain service (`obelisk-blockchain`), Blockchain Config node, Clanker Launch Summary, Wallet, Clanker Buy/Sell (V4 swaps via CabalSwapper), Action Router; `onSwap` trigger (`last_swap.json`) feeds Bag Checker (profit/stop-loss) → Clanker Sell
- **Polymarket** – Polymarket service (`polymarket-service`): CLOB orders, redeem, snapshot, probability model; Polymarket Sniper template and nodes
- **Scheduling** – Cron-like scheduling nodes for periodic tasks
- **One-Click Deploy** – Deploy agents from the UI with environment variable injection
## Quick Start

### Prerequisites

- Node.js 20+ and npm
- Python 3.10+ (a CUDA-capable GPU is required only for local self-hosted Qwen inference; it is not needed when using Router-hosted LLMs, e.g. https://router.theobelisk.ai)
- Docker (for running deployed agents)
### 1. Clone the repo

```bash
git clone https://github.com/ohnodev/obelisk-core.git
cd obelisk-core
```

### 2. Start the Inference Service (Python) – optional if using Router
The inference service hosts the LLM model and serves it via a REST API. Skip this step if you use the Router service (https://router.theobelisk.ai) for hosted LLMs; a GPU is only required for local self-hosted Qwen inference.

```bash
# Create a Python venv and install dependencies
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Configure (optional: defaults work for local dev)
cp .env.example .env
# Edit .env if you want to set an API key or change the port

# Start the inference service
python3 -m uvicorn src.inference.server:app --host 127.0.0.1 --port 7780
```

The first run downloads the Qwen3-0.6B model (~600MB). Once running, test it:

```bash
curl http://localhost:7780/health
```

### 3. Start Blockchain / Polymarket Services (optional)
For Clanker or Polymarket workflows you need the blockchain and polymarket services. For local dev that only uses the default Telegram/inference flow, you can skip this step.

**Option A – PM2 (recommended):** start all services, including blockchain and polymarket:

```bash
./pm2-manager.sh start
```

**Option B – without PM2:** start each service from its directory (see `blockchain-service/README.md` and `polymarket-service/README.md`). For example, from the repo root, build and run the blockchain service on port 8888 and the polymarket service on port 1110.
### 4. Start the Execution Engine (TypeScript)

```bash
cd ts
npm install
npm run build
cd ..
```

### 5. Start the UI

```bash
cd ui
npm install
npm run dev
```

Open http://localhost:3000 in your browser. You should see the visual workflow editor.
### 6. Run your first workflow

- The default workflow is pre-loaded; it includes a Telegram bot setup
- Click **Queue Prompt** (▶) to execute the workflow
- The output appears in the output nodes on the canvas
## Using PM2 (Recommended for Production)

We provide a `pm2-manager.sh` script that manages all services (core, inference, blockchain, polymarket):

```bash
# Start everything
./pm2-manager.sh start

# Restart services (clears logs)
./pm2-manager.sh restart

# Stop everything
./pm2-manager.sh stop

# View status
./pm2-manager.sh status

# View logs
./pm2-manager.sh logs
```

PM2 keeps the core API, inference, blockchain, and polymarket services running, auto-restarts them on crashes, and manages log files.
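For scripting, a quick reachability sweep over the three locally running services can be sketched like this. The `/health` path is documented above for the inference service; using the same path for the blockchain and polymarket services is an assumption in this sketch.

```shell
#!/usr/bin/env bash
# Reachability sweep over the local services (ports from this README).
# Note: /health is documented for inference; assumed for the other two.
check_service() {
  # $1 = name, $2 = URL; prints "<name> UP" or "<name> DOWN"
  if curl -fsS --max-time 2 "$2" >/dev/null 2>&1; then
    echo "$1 UP"
  else
    echo "$1 DOWN"
  fi
}

check_service inference  http://localhost:7780/health
check_service blockchain http://localhost:8888/health
check_service polymarket http://localhost:1110/health
```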
## Agent Deployment

Agents are workflows packaged into Docker containers that run autonomously.

### Building the Agent Image

```bash
docker build -t obelisk-agent:latest -f docker/Dockerfile .
```

### Deploying from the UI

- Connect your wallet in the UI toolbar
- Design your workflow (or use the default)
- Click **Deploy**; the UI sends the workflow to your deployment service
- The agent runs in a Docker container on your machine
- Manage running agents at `/deployments`
### Running an Agent Manually

When running agents in Docker, the container must reach host services. Set `INFERENCE_SERVICE_URL`, `BLOCKCHAIN_SERVICE_URL`, and `POLYMARKET_SERVICE_URL` to point at the host (e.g. `host.docker.internal` with the appropriate ports). On native Linux, `host.docker.internal` is not defined by default; add `--add-host=host.docker.internal:host-gateway` so it resolves. Docker Compose users: add `extra_hosts: ["host.docker.internal:host-gateway"]` to the service for the same effect.

```bash
docker run -d \
  --add-host=host.docker.internal:host-gateway \
  --name my-agent \
  -e WORKFLOW_JSON='<your workflow JSON>' \
  -e AGENT_ID=agent-001 \
  -e AGENT_NAME="My Bot" \
  -e INFERENCE_SERVICE_URL=http://host.docker.internal:7780 \
  -e BLOCKCHAIN_SERVICE_URL=http://host.docker.internal:8888 \
  -e POLYMARKET_SERVICE_URL=http://host.docker.internal:1110 \
  -e TELEGRAM_BOT_TOKEN=your_token \
  obelisk-agent:latest
```

See `docker/README.md` for full details on environment variables, resource limits, and Docker Compose.
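The same container can be described in Docker Compose. A minimal sketch, assuming the image built above and substituting your own workflow JSON and tokens:

```yaml
# Sketch: Compose equivalent of the docker run command above.
services:
  my-agent:
    image: obelisk-agent:latest
    environment:
      AGENT_ID: agent-001
      AGENT_NAME: "My Bot"
      WORKFLOW_JSON: "<your workflow JSON>"
      INFERENCE_SERVICE_URL: http://host.docker.internal:7780
      BLOCKCHAIN_SERVICE_URL: http://host.docker.internal:8888
      POLYMARKET_SERVICE_URL: http://host.docker.internal:1110
      TELEGRAM_BOT_TOKEN: your_token
    extra_hosts:
      - "host.docker.internal:host-gateway"  # needed on native Linux
    restart: unless-stopped
```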
## Available Nodes
| Node | Description |
|---|---|
| Text | Static text input/output |
| Inference | Calls the LLM via the inference service |
| Inference Config | Configures model parameters (temperature, max tokens, thinking mode) |
| Binary Intent | Yes/no classification for conditional logic |
| Telegram Listener | Polls for incoming Telegram messages |
| TG Send Message | Sends messages via Telegram Bot API (supports quote-reply) |
| Memory Creator | Creates conversation summaries |
| Memory Selector | Retrieves relevant memories for context |
| Memory Storage | Persists memories to storage |
| Telegram Memory Creator | Telegram-specific memory summarization |
| Telegram Memory Selector | Telegram-specific memory retrieval |
| Scheduler | Cron-based scheduling for periodic execution |
## Project Structure

```
obelisk-core/
├── src/inference/        # Python inference service (FastAPI + PyTorch)
│   ├── server.py         # REST API server
│   ├── model.py          # LLM loading and generation
│   ├── queue.py          # Async request queue
│   └── config.py         # Inference configuration
├── ts/                   # TypeScript execution engine
│   ├── src/
│   │   ├── core/         # Workflow runner, node execution
│   │   │   └── execution/
│   │   │       ├── runner.ts
│   │   │       └── nodes/    # All node implementations
│   │   └── utils/        # JSON parsing, logging, etc.
│   └── tests/            # Vitest test suite
├── blockchain-service/   # Clanker state API, block processing, V4 swaps
├── polymarket-service/   # CLOB orders, redeem, market snapshot, probability model
├── ui/                   # Next.js visual workflow editor
│   ├── app/              # Pages (editor, deployments)
│   ├── components/       # React components (Canvas, Toolbar, nodes)
│   └── lib/              # Utilities (litegraph, wallet, API config)
├── docker/               # Dockerfile and compose for agent containers
├── pm2-manager.sh        # PM2 process manager (core, inference, blockchain, polymarket)
├── requirements.txt      # Python deps (inference service only)
└── .env.example          # Environment variable template
```
## Configuration

Copy `.env.example` to `.env`:

```bash
cp .env.example .env
```

Key variables:

| Variable | Description | Default |
|---|---|---|
| `INFERENCE_HOST` | Inference service bind address | `127.0.0.1` |
| `INFERENCE_PORT` | Inference service port | `7780` |
| `INFERENCE_API_KEY` | API key for inference auth (optional for local dev) | – |
| `INFERENCE_DEVICE` | PyTorch device (`cuda`, `cpu`) | auto-detect |
| `INFERENCE_SERVICE_URL` | URL agents use to reach inference | `http://localhost:7780` |
| `BLOCKCHAIN_SERVICE_URL` | Blockchain service (Clanker state, etc.) | `http://localhost:8888` |
| `POLYMARKET_SERVICE_URL` | Polymarket service (orders, redeem, snapshot) | `http://localhost:1110` |
| `TELEGRAM_DEV_AGENT_BOT_TOKEN` | Default Telegram bot token for dev | – |
| `TELEGRAM_CHAT_ID` | Default Telegram chat ID for dev | – |

For remote inference setup (GPU VPS), see `INFERENCE_SERVER_SETUP.md`.
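Inside agent code, these variables are typically read from the environment, falling back to the documented defaults. A minimal sketch (the helper name is illustrative, not from the repo):

```python
import os

# Defaults taken from the configuration table above.
DEFAULTS = {
    "INFERENCE_SERVICE_URL": "http://localhost:7780",
    "BLOCKCHAIN_SERVICE_URL": "http://localhost:8888",
    "POLYMARKET_SERVICE_URL": "http://localhost:1110",
}

def service_url(name: str) -> str:
    """Return the configured URL for a service, or its documented default."""
    return os.environ.get(name, DEFAULTS[name])
```

In a deployed agent container, the `-e` flags from the `docker run` example above populate these variables, so the defaults only apply in local development.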
## Documentation

- Quick Start Guide – Get running in 5 minutes
- Inference API – Inference service endpoints
- Inference Server Setup – Deploy inference on a GPU VPS
- Docker Agents – Build and run agent containers
- UI Guide – Visual workflow editor
- Contributing – How to contribute
- Security – Security best practices
- Changelog – Version history
## License

This project is licensed under the MIT License; see the LICENSE file for details.

## Contributing

Contributions are welcome! See CONTRIBUTING.md for guidelines.

Built with ❤️ by The Obelisk
