Obelisk Core

The consciousness engine for The Obelisk 🧠

Open-source AI agent framework with a visual workflow editor, self-hosted inference, and one-click deployment


๐ŸŒ Website ยท ๐• X (Twitter) ยท ๐Ÿ’ฌ Telegram

Obelisk Core is an open-source framework for building, running, and deploying AI agents. Design workflows visually, connect to a self-hosted LLM, and deploy autonomous agents, all from your own hardware.

Status: 🟡 Alpha (v0.2.0-alpha)


How It Works

Obelisk Core uses several services that work together:

┌──────────────────────────────────┐
│      Visual Workflow Editor      │     ← Browser UI (Next.js)
│   Design agent workflows with    │       Build, test, and deploy
│   drag-and-drop nodes            │       workflows visually
└───────────────┬──────────────────┘
                │ executes
┌───────────────▼──────────────────┐
│   TypeScript Execution Engine    │     ← Agent Runtime (Node.js)
│   Runs workflows as autonomous   │       Nodes: inference, Telegram,
│   agents in Docker containers    │       memory, scheduling, Clanker, Polymarket, etc.
└───────────────┬──────────────────┘
                │ calls
     ┌──────────┴─────────┬─────────────────┬───────────────────┐
     ▼                    ▼                 ▼                   ▼
┌──────────────┐  ┌──────────────┐  ┌──────────────┐  ┌──────────────────┐
│  Inference   │  │  Blockchain  │  │  Polymarket  │  │  Deployment API  │
│  Service     │  │  Service     │  │  Service     │  │  (Agents)        │
│  (Python)    │  │  (Clanker)   │  │  (Orders,    │  │  Build, deploy,  │
│  Qwen3 local │  │  State, V4   │  │  Redeem,     │  │  manage agents   │
│  or Router   │  │  swaps       │  │  Snapshot)   │  │                  │
└──────────────┘  └──────────────┘  └──────────────┘  └──────────────────┘

Services:

  1. Inference Service - Python FastAPI server with self-hosted Qwen3-0.6B, or use the Router Service (https://router.theobelisk.ai) for hosted LLMs (e.g. Mistral). In the Inference Config node, set endpoint_url to https://router.theobelisk.ai (the canonical default); if your router sits behind a path-based proxy, or the service docs specify a /v1 base path, use https://router.theobelisk.ai/v1 instead. Set agent_id (e.g. clawballs) for the agent to use.
  2. Blockchain Service - Clanker state API, launch summary, and V4 swaps (CabalSwapper); workflows read token/pool data and execute buys and sells
  3. Polymarket Service - CLOB orders, position redemption, market snapshots, and a probability model; used by Polymarket Sniper workflows
  4. Deployment Layer - Deploy workflows as Docker agents from the UI; manage running agents at /deployments
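As a concrete sketch, the Router settings above boil down to two fields on the Inference Config node. The field names endpoint_url and agent_id come from this README; representing them as a plain JSON object is an assumption about how the node serializes its settings, not the exact workflow schema:

```python
import json

# Inference Config values for the hosted Router (field names from this README;
# the surrounding JSON shape is illustrative, not the canonical node format).
inference_config = {
    "endpoint_url": "https://router.theobelisk.ai",  # or .../v1 behind a path-based proxy
    "agent_id": "clawballs",
}

print(json.dumps(inference_config, indent=2))
```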

The Deployment API (build, deploy, manage agents) is separate from the PM2-managed group: PM2 starts/stops only core, inference, blockchain, and polymarket. The Deployment API must be deployed and managed outside PM2. When self-hosting it, configure the service with the required settings (e.g. base URL, authentication tokens if applicable) and run it on a standalone VM, in a container (Docker), or on Kubernetes. The UI expects the deployment service at the URL configured in your environment (e.g. api.theobelisk.ai in production). See docker/README.md for agent container and deploy endpoint details.

The UI is a visual node editor (like ComfyUI). The Execution Engine is a TypeScript runtime that processes workflows node-by-node and runs agents in Docker containers.

Features

  • Visual Workflow Editor - Drag-and-drop node-based editor to design agent logic
  • Self-Hosted LLM - Qwen3-0.6B with thinking mode, no external API required; or use the Router Service (https://router.theobelisk.ai) to hook up Mistral or other hosted LLMs via Inference Config (endpoint_url: https://router.theobelisk.ai, agent_id: e.g. clawballs)
  • Autonomous Agents - Deploy workflows as long-running Docker containers
  • Telegram Integration - Listener and sender nodes for building Telegram bots
  • Conversation Memory - Persistent memory with automatic summarization
  • Binary Intent - Yes/no decision nodes for conditional workflow logic
  • Wallet Authentication - Privy-based wallet connect for managing deployed agents
  • Clanker / Blockchain - Blockchain service (obelisk-blockchain), Blockchain Config node, Clanker Launch Summary, Wallet, Clanker Buy/Sell (V4 swaps via CabalSwapper), Action Router; onSwap trigger (last_swap.json) for Bag Checker (profit/stop-loss) → Clanker Sell
  • Polymarket - Polymarket service (polymarket-service): CLOB orders, redeem, snapshot, probability model; Polymarket Sniper template and nodes
  • Scheduling - Cron-like scheduling nodes for periodic tasks
  • One-Click Deploy - Deploy agents from the UI with environment variable injection

Quick Start

Prerequisites

  • Node.js 20+ and npm
  • Python 3.10+ - a CUDA-capable GPU is required only for local self-hosted Qwen inference; not required when using Router-hosted LLMs (e.g. https://router.theobelisk.ai)
  • Docker (for running deployed agents)

1. Clone the repo

git clone https://github.com/ohnodev/obelisk-core.git
cd obelisk-core

2. Start the Inference Service (Python) - optional if using Router

The inference service hosts the LLM model and serves it via REST API. Skip this step if you use the Router service (https://router.theobelisk.ai) for hosted LLMs; a GPU is only required for local self-hosted Qwen inference.

# Create Python venv and install dependencies
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Configure (optional - defaults work for local dev)
cp .env.example .env
# Edit .env if you want to set an API key or change the port

# Start the inference service
python3 -m uvicorn src.inference.server:app --host 127.0.0.1 --port 7780

The first run downloads the Qwen3-0.6B model (~600MB). Once running, test it:

curl http://localhost:7780/health
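If you would rather script that check, a minimal Python probe of the same /health endpoint could look like this (only the port and path come from this README; the probe asserts nothing about the response body, just the status code):

```python
import urllib.error
import urllib.request

def inference_healthy(base_url: str = "http://localhost:7780", timeout: float = 2.0) -> bool:
    """Return True if the inference service's /health endpoint answers HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False  # service down, wrong port, or not yet started

print("inference healthy:", inference_healthy())
```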

3. Start Blockchain / Polymarket Services (optional)

For Clanker or Polymarket workflows you need the blockchain and polymarket services. For local dev that only uses the default Telegram/inference flow, you can skip this step.

Option A - PM2 (recommended): start all services including blockchain and polymarket:

./pm2-manager.sh start

Option B - Without PM2: start each service from its directory (see blockchain-service/README.md and polymarket-service/README.md). For example, from the repo root: build and run the blockchain service on port 8888 and the polymarket service on port 1110.
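For reference, the inference, blockchain, and polymarket services listen on these default local ports (the port numbers 7780, 8888, and 1110 are the ones used throughout this README; nothing else is assumed):

```python
# Default local base URLs for Obelisk Core's backing services.
SERVICES = {
    "inference":  "http://localhost:7780",
    "blockchain": "http://localhost:8888",
    "polymarket": "http://localhost:1110",
}

for name, base_url in SERVICES.items():
    print(f"{name}: {base_url}")
```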

4. Start the Execution Engine (TypeScript)

cd ts
npm install
npm run build
cd ..

5. Start the UI

cd ui
npm install
npm run dev

Open http://localhost:3000 in your browser. You should see the visual workflow editor.

6. Run your first workflow

  1. The default workflow is pre-loaded - it includes a Telegram bot setup
  2. Click Queue Prompt (▶) to execute the workflow
  3. The output appears in the output nodes on the canvas

We provide a pm2-manager.sh script that manages all services (core, inference, blockchain, polymarket):

# Start everything
./pm2-manager.sh start

# Restart services (clears logs)
./pm2-manager.sh restart

# Stop everything
./pm2-manager.sh stop

# View status
./pm2-manager.sh status

# View logs
./pm2-manager.sh logs

PM2 keeps the core API, inference, blockchain, and polymarket services running, auto-restarts on crashes, and manages log files.

Agent Deployment

Agents are workflows packaged into Docker containers that run autonomously.

Building the Agent Image

docker build -t obelisk-agent:latest -f docker/Dockerfile .

Deploying from the UI

  1. Connect your wallet in the UI toolbar
  2. Design your workflow (or use the default)
  3. Click Deploy - the UI sends the workflow to your deployment service
  4. The agent runs in a Docker container on your machine
  5. Manage running agents at /deployments

Running an Agent Manually

When running agents in Docker, the container must reach host services. Set INFERENCE_SERVICE_URL, BLOCKCHAIN_SERVICE_URL, and POLYMARKET_SERVICE_URL to point at the host (e.g. host.docker.internal with the appropriate ports). On native Linux, host.docker.internal is not defined by default; add --add-host=host.docker.internal:host-gateway so it resolves. Docker Compose users: add extra_hosts: ["host.docker.internal:host-gateway"] to the service for the same effect.

docker run -d \
  --add-host=host.docker.internal:host-gateway \
  --name my-agent \
  -e WORKFLOW_JSON='<your workflow JSON>' \
  -e AGENT_ID=agent-001 \
  -e AGENT_NAME="My Bot" \
  -e INFERENCE_SERVICE_URL=http://host.docker.internal:7780 \
  -e BLOCKCHAIN_SERVICE_URL=http://host.docker.internal:8888 \
  -e POLYMARKET_SERVICE_URL=http://host.docker.internal:1110 \
  -e TELEGRAM_BOT_TOKEN=your_token \
  obelisk-agent:latest

See docker/README.md for full details on environment variables, resource limits, and Docker Compose.
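For Compose users, the extra_hosts tip above translates to a fragment like the following (the service name my-agent and the environment values mirror the docker run example; treat this as a sketch and check docker/README.md for the canonical file):

```yaml
services:
  my-agent:
    image: obelisk-agent:latest
    extra_hosts:
      - "host.docker.internal:host-gateway"   # resolve the host on native Linux
    environment:
      AGENT_ID: agent-001
      INFERENCE_SERVICE_URL: http://host.docker.internal:7780
      BLOCKCHAIN_SERVICE_URL: http://host.docker.internal:8888
      POLYMARKET_SERVICE_URL: http://host.docker.internal:1110
```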

Available Nodes

Node                        Description
Text                        Static text input/output
Inference                   Calls the LLM via the inference service
Inference Config            Configures model parameters (temperature, max tokens, thinking mode)
Binary Intent               Yes/no classification for conditional logic
Telegram Listener           Polls for incoming Telegram messages
TG Send Message             Sends messages via Telegram Bot API (supports quote-reply)
Memory Creator              Creates conversation summaries
Memory Selector             Retrieves relevant memories for context
Memory Storage              Persists memories to storage
Telegram Memory Creator     Telegram-specific memory summarization
Telegram Memory Selector    Telegram-specific memory retrieval
Scheduler                   Cron-based scheduling for periodic execution

Project Structure

obelisk-core/
├── src/inference/          # Python inference service (FastAPI + PyTorch)
│   ├── server.py           # REST API server
│   ├── model.py            # LLM loading and generation
│   ├── queue.py            # Async request queue
│   └── config.py           # Inference configuration
├── ts/                     # TypeScript execution engine
│   ├── src/
│   │   ├── core/           # Workflow runner, node execution
│   │   │   └── execution/
│   │   │       ├── runner.ts
│   │   │       └── nodes/  # All node implementations
│   │   └── utils/          # JSON parsing, logging, etc.
│   └── tests/              # Vitest test suite
├── blockchain-service/     # Clanker state API, block processing, V4 swaps
├── polymarket-service/     # CLOB orders, redeem, market snapshot, probability model
├── ui/                     # Next.js visual workflow editor
│   ├── app/                # Pages (editor, deployments)
│   ├── components/         # React components (Canvas, Toolbar, nodes)
│   └── lib/                # Utilities (litegraph, wallet, API config)
├── docker/                 # Dockerfile and compose for agent containers
├── pm2-manager.sh          # PM2 process manager (core, inference, blockchain, polymarket)
├── requirements.txt        # Python deps (inference service only)
└── .env.example            # Environment variable template

Configuration

Copy .env.example to .env:

cp .env.example .env

Key variables:

Variable                      Description                                          Default
INFERENCE_HOST                Inference service bind address                       127.0.0.1
INFERENCE_PORT                Inference service port                               7780
INFERENCE_API_KEY             API key for inference auth (optional for local dev)  -
INFERENCE_DEVICE              PyTorch device (cuda, cpu)                           auto-detect
INFERENCE_SERVICE_URL         URL agents use to reach inference                    http://localhost:7780
BLOCKCHAIN_SERVICE_URL        Blockchain service (Clanker state, etc.)             http://localhost:8888
POLYMARKET_SERVICE_URL        Polymarket service (orders, redeem, snapshot)        http://localhost:1110
TELEGRAM_DEV_AGENT_BOT_TOKEN  Default Telegram bot token for dev                   -
TELEGRAM_CHAT_ID              Default Telegram chat ID for dev                     -
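Putting the defaults together, a minimal local-dev .env could look like the following (values mirror the table above; INFERENCE_DEVICE is normally auto-detected so it is omitted, and the Telegram entries are placeholders):

```ini
INFERENCE_HOST=127.0.0.1
INFERENCE_PORT=7780
INFERENCE_SERVICE_URL=http://localhost:7780
BLOCKCHAIN_SERVICE_URL=http://localhost:8888
POLYMARKET_SERVICE_URL=http://localhost:1110
TELEGRAM_DEV_AGENT_BOT_TOKEN=your_token
TELEGRAM_CHAT_ID=your_chat_id
```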

For remote inference setup (GPU VPS), see INFERENCE_SERVER_SETUP.md.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Contributing

Contributions are welcome! See CONTRIBUTING.md for guidelines.


Built with ❤️ by The Obelisk