# Heike

**Deterministic AI Runtime for Production-Grade Agent Systems**

Reproducible execution · Policy-gated tools · Workspace-safe operations

The deterministic runtime for reliable AI agents. No more prompt roulette. 侍
## At A Glance

- **Deterministic Core:** `plan -> think -> act -> reflect` with explicit phase boundaries.
- **Governed Tooling:** policy + approval + sandbox enforced in runtime paths.
- **Production Runtime:** shared core for `run` and `daemon` modes.
## Why Heike
- Non-reproducible behavior across runs: deterministic loop with explicit phase boundaries.
- Unsafe tool execution paths: policy gate and approval workflow in the tool runner path.
- Session/workspace race conditions: session locking and single-writer workspace model.
- Divergent local vs service behavior: shared runtime core for interactive and daemon modes.
- Hidden runtime behavior: config-driven execution (`config.yaml`) with explicit defaults.
Heike is not opposed to LLM-first frameworks; it simply treats deterministic orchestration and governance as runtime invariants.
## Project Status
Heike is in beta. Current focus is runtime correctness, deterministic behavior, and operator safety over feature sprawl.
## Start Here
- At A Glance
- Why Heike
- Project Status
- System Topology
- Quick Start
- Provider Setup Paths
- Adapter Setup (Slack and Telegram)
- Deterministic Execution Contract
- End-to-End Request Flow
- Worker Lanes (Interactive vs Background)
- Minimal Config Baseline
- Governance and Approval Workflow
- Skill Runtime Model
- Persistence Model
- Run Modes
- Tooling Highlights
- Quality Gates
- Documentation Map
- Contributing and Community
- Release Assets
## System Topology

```mermaid
flowchart LR
  subgraph CH["Event Channels"]
    C1["CLI (run)"]
    C2["HTTP (/v1/events)"]
    C3["Scheduler Tick"]
  end
  subgraph RT["Runtime Core"]
    I["Ingress"]
    W["Workers"]
    K["Orchestrator Kernel"]
    CE["Cognitive Engine"]
    TR["Tool Runner"]
    EG["Egress"]
  end
  subgraph GOV["Governance"]
    PE["Policy Engine"]
    AP["Approvals State"]
  end
  subgraph STORE["Durable State"]
    SW["Store Worker"]
    WS["Workspace Files"]
  end
  C1 --> I
  C2 --> I
  C3 --> I
  I --> W --> K --> CE --> TR
  TR --> PE --> AP
  K --> EG
  W --> SW
  K --> SW
  SW --> WS
```

## Quick Start
Pick one path below. Do not mix commands across paths.

Command convention: examples use `heike` in `PATH`. If you run from local build artifacts, use `./heike`.

Install the release binary:

```bash
curl -fsSL https://raw.githubusercontent.com/harunnryd/heike/main/install.sh | sh
```

### Path A: Binary Install + OpenAI API Key

```bash
heike config init
export OPENAI_API_KEY="your-key"
heike run
```

If you use Anthropic, Gemini, ZAI, Ollama, or Codex OAuth, keep the same run flow and replace only the auth/model setup using Provider Setup Paths.

### Path B: Binary Install + OpenAI Codex OAuth

```bash
heike config init
heike provider login openai-codex
heike run
```

### Path C: Build From Source

```bash
go build -o heike ./cmd/heike
./heike config init
OPENAI_API_KEY="your-key" ./heike run
```

### Path D: Daemon Smoke

```bash
heike config init
heike daemon --workspace default
curl -fsS http://127.0.0.1:8080/health
```

### Path E: Docker Run

```bash
docker build -t heike:local .
docker run --rm -p 8080:8080 -e OPENAI_API_KEY="your-key" heike:local
```

Common REPL commands:

```text
/help
/approve <approval_id>
/deny <approval_id>
/clear
/exit
```
## Provider Setup Paths

Heike supports multiple provider/auth paths; you are not limited to `OPENAI_API_KEY`. Always keep `models.default` aligned with a model name defined in `models.registry`.
- **OpenAI API:** `provider: openai` via `OPENAI_API_KEY`.
- **Anthropic API:** `provider: anthropic` via `ANTHROPIC_API_KEY`.
- **Gemini API:** `provider: gemini` via `GEMINI_API_KEY`.
- **ZAI API:** `provider: zai` via `ZAI_API_KEY`.
- **Local Ollama:** `provider: ollama` via `base_url` (optional `api_key`).
- **OpenAI Codex:** `provider: openai-codex` via OAuth token file (`heike provider login openai-codex`).
```mermaid
flowchart TD
  Q["What auth path do you want?"] --> R["Managed provider"]
  Q --> L["Local provider"]
  R --> O["OAuth token flow"]
  R --> K["Static API key"]
  O --> C1["provider: openai-codex"]
  C1 --> C2["Run: heike provider login openai-codex"]
  K --> K1["provider: openai | anthropic | gemini | zai"]
  K1 --> K2["Set env key for selected provider"]
  L --> L1["provider: ollama"]
  L1 --> L2["Set models.registry.base_url"]
```

### API Key Providers
OpenAI:

```bash
export OPENAI_API_KEY="..."
heike run
```

Anthropic:

```bash
export ANTHROPIC_API_KEY="..."
heike run
```

Gemini:

```bash
export GEMINI_API_KEY="..."
heike run
```

ZAI:

```bash
export ZAI_API_KEY="..."
heike run
```

Only set keys for providers you actually use.
### OpenAI Codex OAuth (No Static API Key)

```bash
heike provider login openai-codex
heike config view
```

Relevant config keys: `auth.codex.callback_addr`, `auth.codex.redirect_uri`, `auth.codex.oauth_timeout`, `auth.codex.token_path`.
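As a quick pre-flight check, you can look for an existing token file before re-running the login flow. The path below is a placeholder, not Heike's actual default; read the real value from `auth.codex.token_path` in your config:

```shell
# Placeholder path -- substitute the value of auth.codex.token_path from your config.
token_path="$HOME/.heike/codex-token.json"
if [ -f "$token_path" ]; then
  echo "token file present: $token_path"
else
  echo "no token file; run: heike provider login openai-codex"
fi
```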
### Local Ollama Setup

```yaml
models:
  default: local-llama
  registry:
    - name: local-llama
      provider: ollama
      base_url: http://localhost:11434/v1
      api_key: ollama
```

## Adapter Setup (Slack and Telegram)
### Current Beta Runtime Status

- `heike run`: CLI ingress + CLI egress.
- `heike daemon`: component lifecycle + scheduler + health endpoint.
- Slack/Telegram adapter implementations exist in `internal/adapter`, and the config schema is ready in `config.yaml`.
- Auto-wiring of Slack/Telegram adapters into daemon startup is not enabled by default yet in this beta branch.
```mermaid
flowchart LR
  subgraph INPUT["Input Adapters"]
    C["CLI Adapter (enabled)"]
    S["Slack Adapter (available)"]
    T["Telegram Adapter (available)"]
  end
  subgraph CORE["Runtime Pipeline"]
    I["Ingress Queue"]
    W["Workers"]
    K["Orchestrator Kernel"]
    E["Egress"]
  end
  subgraph OUTPUT["Output Adapters"]
    C2["CLI Output (enabled)"]
    S2["Slack Output (available)"]
    T2["Telegram Output (available)"]
  end
  C --> I
  S -. optional wiring .-> I
  T -. optional wiring .-> I
  I --> W --> K --> E
  E --> C2
  E -. optional wiring .-> S2
  E -. optional wiring .-> T2
```

### Slack Config Baseline
```yaml
adapters:
  slack:
    enabled: true
    port: 3000
    # signing_secret: "..."
    # bot_token: "xoxb-..."
```

Environment override equivalents:

```bash
export HEIKE_ADAPTERS_SLACK_ENABLED=true
export HEIKE_ADAPTERS_SLACK_PORT=3000
export HEIKE_ADAPTERS_SLACK_SIGNING_SECRET="..."
export HEIKE_ADAPTERS_SLACK_BOT_TOKEN="xoxb-..."
```

Slack event endpoint in the adapter implementation: `POST /slack/events`.
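For context on what `signing_secret` is for: Slack signs each request to your events endpoint with an HMAC over `v0:<timestamp>:<body>`, which the receiving adapter is expected to verify. The adapter's own verification code is not shown here; this is a sketch of Slack's documented v0 scheme using placeholder values:

```shell
# Compute a Slack-style v0 request signature (placeholder secret/timestamp/body).
secret="8f742231b10e8888abcd99yyyzzz85a5"
timestamp="1531420618"
body='{"type":"url_verification","challenge":"test"}'
base="v0:${timestamp}:${body}"
# HMAC-SHA256 over the base string, hex-encoded, prefixed with "v0=".
sig="v0=$(printf '%s' "$base" | openssl dgst -sha256 -hmac "$secret" -r | cut -d' ' -f1)"
echo "$sig"
```

A request is authentic when this value matches the `X-Slack-Signature` header (compared in constant time) and the timestamp is recent.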
### Telegram Config Baseline

```yaml
adapters:
  telegram:
    enabled: true
    update_timeout: 60
    # bot_token: "..."
```

Environment override equivalents:

```bash
export HEIKE_ADAPTERS_TELEGRAM_ENABLED=true
export HEIKE_ADAPTERS_TELEGRAM_UPDATE_TIMEOUT=60
export HEIKE_ADAPTERS_TELEGRAM_BOT_TOKEN="..."
```

The Telegram adapter uses long polling.
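The long-polling shape is worth sketching: each `getUpdates` call blocks for up to `update_timeout` seconds, and the client acknowledges work by advancing an offset. The real adapter lives in `internal/adapter`; `poll` below is a stand-in stub, not Telegram's API:

```shell
# poll() stubs Telegram's getUpdates; a real call would block up to update_timeout
# seconds and return any pending updates for the given offset.
poll() { printf '{"update_id":%s,"text":"hi"}\n' "$1"; }
offset=0
for _ in 1 2 3; do
  update=$(poll "$offset")
  echo "handle: $update"
  offset=$((offset + 1))   # acknowledge by advancing the offset past this update
done
```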
## Deterministic Execution Contract

Every task executes through the same fixed cognitive loop:

1. `plan`
2. `think`
3. `act` (only when tool calls are required)
4. `reflect`

This contract is enforced by the orchestrator kernel, so the runtime stays reproducible across `run` and `daemon` modes.
## End-to-End Request Flow

```mermaid
flowchart TD
  U["User Input"] --> I["Ingress Event"]
  I --> W["Worker Lane (interactive/background)"]
  W --> O["Orchestrator Kernel"]
  O --> C["Cognitive Loop: plan -> think -> act -> reflect"]
  C --> T["Tool Runner"]
  T --> P["Policy Engine"]
  P -->|allow| X["Execute Tool"]
  P -->|approval required| A["Approval Queue"]
  A --> CMD["/approve <id> or /deny <id>"]
  CMD --> X
  X --> R["Reflection + Final Response"]
  R --> E["Egress / Transcript Persist"]
```

## Worker Lanes (Interactive vs Background)
Heike ingests events into two independent lanes with different operational intent:
- Interactive lane: user-facing requests that should be processed with low latency.
- Background lane: scheduled/system workloads that should not block interactive UX.
```mermaid
flowchart LR
  U["User Message"] --> IQ["Interactive Queue"]
  S["Scheduler/Cron/System Event"] --> BQ["Background Queue"]
  IQ --> IW["Interactive Worker"]
  BQ --> BW["Background Worker"]
  IW --> K["Orchestrator Kernel"]
  BW --> K
  K --> E["Egress + Store"]
```

Execution properties:

- The interactive submit path uses `ingress.interactive_submit_timeout` for backpressure control.
- Queue capacities are isolated (`interactive_queue_size` and `background_queue_size`).
- Both workers share the same deterministic kernel contract, but run on separate event channels.
- Graceful shutdown drains ingress using `ingress.drain_timeout` and `ingress.drain_poll_interval`.
Recommended baseline:

```yaml
ingress:
  interactive_queue_size: 100
  background_queue_size: 1000
  interactive_submit_timeout: 500ms
  drain_timeout: 5s
  drain_poll_interval: 100ms
worker:
  shutdown_timeout: 30s
```

## Minimal Config Baseline
After `heike config init`, this is the minimum shape you should verify in `~/.heike/config.yaml`:
```yaml
models:
  default: gpt-5.2-codex
  registry:
    - name: gpt-5.2-codex
      provider: openai-codex
governance:
  require_approval:
    - exec_command
    - write_stdin
    - apply_patch
  auto_allow:
    - time
    - search_query
    - open
    - click
    - find
    - weather
    - finance
    - sports
    - image_query
    - screenshot
orchestrator:
  max_sub_tasks: 5
  max_tools_per_turn: 8
```

Provider credentials are read from environment variables: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GEMINI_API_KEY`, `ZAI_API_KEY`.
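A quick way to audit which of those credentials your shell actually exports (this helper loop is not part of Heike itself):

```shell
# Print set/unset for each provider credential variable named above.
for key in OPENAI_API_KEY ANTHROPIC_API_KEY GEMINI_API_KEY ZAI_API_KEY; do
  if printenv "$key" >/dev/null 2>&1; then
    echo "$key: set"
  else
    echo "$key: unset"
  fi
done
```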
## Governance and Approval Workflow

Policy is enforced in the tool runner path, not only in prompting.

```bash
heike policy show
heike policy set exec_command --require-approval
heike policy set time --allow
```

When a guarded tool is requested:

1. The runtime returns an approval ID.
2. The operator resolves it via `/approve <approval_id>` or `/deny <approval_id>`.
3. The exact decision is persisted in workspace governance state.
## Skill Runtime Model

Skill structure:

```text
<skill>/
  SKILL.md
  tools/
    tools.yaml
    scripts...
```

Resolution order:

1. `<workspace_path>/skills`
2. `$HOME/.heike/skills`
3. `$HOME/.heike/workspaces/<workspace-id>/skills`
4. `<workspace_path>/.heike/skills`

If `tools/tools.yaml` is absent, Heike falls back to script discovery in `tools/`.
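Scaffolding a skill under one of the resolution-order directories follows directly from the layout above. The skill name `hello` and the file contents here are placeholders, not a real Heike skill:

```shell
# Create a minimal skill skeleton under the user-level skills directory.
skill="$HOME/.heike/skills/hello"
mkdir -p "$skill/tools"
printf '# hello\n\nA placeholder skill description.\n' > "$skill/SKILL.md"
printf 'tools: []\n' > "$skill/tools/tools.yaml"
ls "$skill"
```

If you omit `tools/tools.yaml`, the fallback described above should pick up executable scripts placed directly in `tools/` instead.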
## Persistence Model

Workspace data is stored in:

```text
~/.heike/workspaces/<workspace_id>/
```

Important files:

- `workspace.lock`
- `sessions/index.json`
- `sessions/<session_id>.jsonl`
- `governance/approvals.json`
- `governance/domains.json`
- `governance/processed_keys.json`
- `scheduler/tasks.json`

This is what makes idempotency, approval flow, and transcript replay deterministic.
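The choice of JSONL for session transcripts is what makes replay a simple in-order read: appends never rewrite earlier events, and each line is one self-contained event. This sketch mimics the `sessions/<session_id>.jsonl` idea with an invented event shape, not Heike's actual schema:

```shell
# Append two events to a transcript file, then replay them in order.
ws="$(mktemp -d)/sessions"
mkdir -p "$ws"
printf '%s\n' '{"phase":"plan","note":"outline task"}'    >> "$ws/s1.jsonl"
printf '%s\n' '{"phase":"act","note":"ran exec_command"}' >> "$ws/s1.jsonl"
# Replay: one line, one event, original order preserved.
while IFS= read -r event; do
  echo "replay: $event"
done < "$ws/s1.jsonl"
```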
## Run Modes

- Interactive REPL (`heike run`): local development and manual agent workflows.
- Service/Daemon (`heike daemon`): long-running operations with health checks and scheduling.

```mermaid
flowchart LR
  subgraph RUN["heike run"]
    R1["Build runtime components"] --> R2["Start kernel + workers + scheduler"] --> R3["Start REPL loop"]
  end
  subgraph DAEMON["heike daemon"]
    D1["Build daemon component graph"] --> D2["Start components by dependency"] --> D3["Serve /health + process events"]
  end
  R3 --> I["Ingress"]
  D3 --> I
```

## Tooling Highlights
- **Core execution:** `exec_command`, `write_stdin`, `apply_patch`.
- **Web and data:** `search_query`, `open`, `click`, `find`, `weather`, `finance`, `sports`, `time`, `image_query`.
- **Local interaction:** `view_image`, `screenshot`.

Full contracts:
## Quality Gates

Minimum checks before merge:

```bash
./scripts/ci/agents_guard.sh
go test ./...
go build ./cmd/heike
go vet ./...
```

## Documentation Map
### Start

### Runtime Core

### Domain Deep Dives
- Domain Overview
- Event Pipeline Domain
- Model Domain
- Policy and Tool Runner Domain
- Executor Domain
- Sandbox Domain
- Skill Runtime Domain
### Reference and Ops
- Skill System
- Configuration
- Command Reference
- Governance and Approvals
- Provider and Auth
- Workspace Layout
- Testing
- Release Checklist
- Container and Release
- Repository Readiness
