# Loopy

A loop-driven interactive coding agent CLI for Java developers. Built on Spring AI with an embedded agent loop, a modern terminal UI, and domain skills.

Loopy is the entry point for knowledge-directed execution — the idea that curated domain knowledge plus structured execution contribute more to agent quality than model choice alone.
## Highlights

- **Skills** — domain knowledge that makes agents smarter. Discover, install, and create skills from a curated catalog of 23+ skills. Skills follow the agentskills.io spec and work in any agentic CLI
- **Multi-provider** — Anthropic (default), OpenAI, Google Gemini. Switch with `--provider`
- **Three modes** — interactive TUI, single-shot print (`-p`), REPL (`--repl`)
- **Cost visibility** — per-turn token usage and estimated cost
- **Context compaction** — automatic summarization for long sessions
- **CLAUDE.md auto-injection** — project-specific instructions, same convention as Claude Code
- **Forge agent** — scaffold agent experiment projects from YAML briefs
## Documentation

- **Getting Started** — installation, API key setup, first session
- **CLI Reference** — all commands, flags, and modes
- **Configuration** — environment variables, model selection, custom endpoints
- **Architecture** — four-layer design, data flow, quality guardrails
- **Forge Agent** — scaffolding experiment projects with `/forge-agent`
## Prerequisites

- **Java 21+** — check with `java -version`. Install via SDKMAN: `sdk install java 21.0.9-librca`
- **API key** for at least one provider:

| Provider | Environment Variable | Get a key |
|---|---|---|
| Anthropic (default) | `ANTHROPIC_API_KEY` | console.anthropic.com |
| OpenAI | `OPENAI_API_KEY` | platform.openai.com |
| Google Gemini | `GOOGLE_API_KEY` | aistudio.google.com |

```shell
export ANTHROPIC_API_KEY=sk-ant-...
```

Add the export to your shell profile (`~/.bashrc`, `~/.zshrc`) to persist across sessions.
## Quick Start

### Download fat JAR (easiest)

```shell
curl -LO https://github.com/markpollack/loopy/releases/download/v0.2.0/loopy-0.2.0-SNAPSHOT.jar
java -jar loopy-0.2.0-SNAPSHOT.jar
```

### Build from source

```shell
git clone https://github.com/markpollack/loopy.git
cd loopy
./mvnw package
java -jar target/loopy-0.2.0-SNAPSHOT.jar
```

## Execution Modes
Loopy supports three modes for different workflows:
### Interactive TUI (default)

Full terminal UI with chat history, async agent calls, and spinner animation:

```shell
loopy
```

Type messages to chat with the agent. Use slash commands (see below) for built-in operations. The TUI uses the Elm Architecture via tui4j.
### Print Mode (single-shot)

Run a single task and exit — useful for scripting and CI:

```shell
loopy -p "refactor this class to use the builder pattern"
```

Output goes to stdout, status to stderr. Exit code 0 on success, 1 on failure.
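That contract makes print mode easy to wire into scripts. A minimal sketch of a CI step — it uses a placeholder `loopy_task` function in place of the real `loopy -p` call so the snippet stands alone:

```shell
#!/bin/sh
# Placeholder for `loopy -p "<prompt>"` — swap in the real CLI call.
loopy_task() { printf 'done\n'; return 0; }

# Capture stdout; status messages (stderr) pass through to the CI log.
if output=$(loopy_task "refactor this class to use the builder pattern"); then
  echo "agent succeeded: $output"
else
  echo "agent failed" >&2
  exit 1
fi
```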
### REPL Mode (readline)

Simple line-by-line prompt for testing and quick tasks:

```shell
loopy --repl
```

## CLI Options
| Flag | Description | Default |
|---|---|---|
| `-d, --directory <path>` | Working directory | Current directory |
| `-m, --model <name>` | Model to use | Per-provider (see Configuration) |
| `-t, --max-turns <n>` | Maximum agent loop iterations | 20 |
| `-p, --print <prompt>` | Single-shot print mode | — |
| `--provider <name>` | AI provider: `anthropic`, `openai`, `google-genai` | `anthropic` |
| `--base-url <url>` | Custom API base URL (vLLM, LM Studio) | — |
| `--debug` | Verbose agent activity (turns, tool calls, cost) on stderr | — |
| `--repl` | REPL mode (readline loop) | — |
| `--help` | Print usage information | — |
| `--version` | Print version | — |
## Slash Commands

In TUI and REPL modes, lines starting with `/` are intercepted before reaching the agent — no LLM tokens wasted.

| Command | Description |
|---|---|
| `/help` | List available commands |
| `/clear` | Clear session memory (start a fresh conversation) |
| `/quit` | Exit Loopy |
| `/skills` | Discover, search, install, and manage domain skills |
| `/forge-agent --brief <path>` | Bootstrap an agent experiment project from a YAML brief |
## Features

### Skills (Domain Knowledge)

Skills are curated knowledge packages that make agents smarter. They follow the agentskills.io specification and work in 40+ agentic CLIs — not just Loopy.

```shell
loopy --repl
> /skills                             # List installed and discovered skills
> /skills search testing              # Search the curated catalog
> /skills info systematic-debugging   # Skill details + install instructions
> /skills add systematic-debugging    # Install to ~/.claude/skills/
> /skills remove systematic-debugging # Uninstall
```

Loopy discovers skills from three sources:
| Source | Path | How it gets there |
|---|---|---|
| Project | `.claude/skills/*/SKILL.md` | Team-shared skills checked into the repo |
| Global | `~/.claude/skills/*/SKILL.md` | `/skills add` from the curated catalog |
| Classpath | `META-INF/skills/SKILL.md` in JARs | Maven dependency |
**Progressive disclosure** — the agent sees skill names and descriptions, and loads full content only when relevant to the task. No tokens wasted on unused skills.

**Curated catalog** — 23 skills from 8 publishers covering testing, debugging, security, code review, architecture, and more. Run `/skills search` to browse.
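The progressive-disclosure idea can be sketched in plain Java. This is a hypothetical registry — not Loopy's actual classes — that exposes only names and descriptions up front and loads a skill body on demand:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical sketch of progressive disclosure; Loopy's real SkillsTool differs.
public class SkillRegistry {
    // Name -> one-line description: all the agent sees initially.
    private final Map<String, String> descriptions = new LinkedHashMap<>();
    // Name -> lazy loader for the full SKILL.md body.
    private final Map<String, Supplier<String>> bodies = new LinkedHashMap<>();

    public void register(String name, String description, Supplier<String> body) {
        descriptions.put(name, description);
        bodies.put(name, body);
    }

    /** Cheap summary injected into the prompt: no full bodies, few tokens. */
    public String summary() {
        StringBuilder sb = new StringBuilder();
        descriptions.forEach((n, d) -> sb.append(n).append(": ").append(d).append('\n'));
        return sb.toString();
    }

    /** Full content is loaded only when the agent asks for this skill. */
    public String load(String name) {
        return bodies.get(name).get();
    }
}
```

The point of the split is that the summary costs a few dozen tokens per skill, while a full body may cost thousands — so unused skills are effectively free.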
### Creating Custom Skills

Place a `SKILL.md` file in `.claude/skills/<name>/` (project) or `~/.claude/skills/<name>/` (global):

```markdown
---
name: my-team-conventions
description: Coding conventions for my team's Spring Boot codebase
---

# Instructions

When working on this codebase:

- Use constructor injection, never field injection
- All REST endpoints return ProblemDetail for errors
- Tests use @WebMvcTest with MockMvc
```

### CLAUDE.md Auto-Injection
Loopy automatically reads CLAUDE.md from the working directory and appends it to the agent's system prompt. This gives the agent project-specific context without any CLI flags — the same convention used by Claude Code.
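The mechanism is simple enough to sketch. This is illustrative code, not Loopy's implementation: read `CLAUDE.md` from the working directory, if present, and append it to the system prompt:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch of CLAUDE.md injection; Loopy's actual code may differ.
public class ClaudeMdInjector {
    /** Returns the system prompt with CLAUDE.md appended when the file exists. */
    public static String inject(String systemPrompt, Path workingDirectory) throws IOException {
        Path claudeMd = workingDirectory.resolve("CLAUDE.md");
        if (!Files.isRegularFile(claudeMd)) {
            return systemPrompt; // no project instructions: prompt unchanged
        }
        return systemPrompt + "\n\n# Project instructions (CLAUDE.md)\n"
                + Files.readString(claudeMd);
    }
}
```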
### Cost Visibility

Every response shows token usage and estimated cost:

```
tokens: 1234/567 | cost: $0.0089
```
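The estimate itself is straightforward arithmetic: input and output token counts multiplied by per-million-token prices. A sketch with hypothetical prices — the real rates depend on the model and provider:

```java
// Illustrative cost estimate; actual per-token prices vary by model and provider.
public class CostEstimate {
    /** Cost in dollars, given token counts and prices in $ per million tokens. */
    public static double cost(long inputTokens, long outputTokens,
                              double inputPricePerM, double outputPricePerM) {
        return inputTokens * inputPricePerM / 1_000_000.0
                + outputTokens * outputPricePerM / 1_000_000.0;
    }
}
```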
### Context Compaction
Long sessions can exceed the model's context window. Loopy automatically summarizes older messages when estimated tokens exceed 50% of the model limit, using a cheap model from the same provider (e.g., Haiku for Anthropic, gpt-4o-mini for OpenAI, gemini-2.5-flash-lite for Gemini). This reuses the same API credentials via a per-request model override.
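The trigger condition can be sketched as a simple check. This is illustrative — Loopy's real estimator may differ; a common rough heuristic is about four characters per token:

```java
import java.util.List;

// Illustrative compaction trigger; Loopy's actual token estimator may differ.
public class CompactionTrigger {
    static final double THRESHOLD = 0.5;      // fraction of the context limit
    static final int CONTEXT_LIMIT = 200_000; // tokens

    /** Rough token estimate: ~4 characters per token. */
    static long estimateTokens(List<String> messages) {
        return messages.stream().mapToLong(String::length).sum() / 4;
    }

    /** True when older messages should be summarized by the cheap model. */
    static boolean shouldCompact(List<String> messages) {
        return estimateTokens(messages) > THRESHOLD * CONTEXT_LIMIT;
    }
}
```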
### Agent Tools

The embedded agent has access to:

| Tool | Description |
|---|---|
| `bash` | Execute shell commands |
| `Read` | Read file contents |
| `Write` | Create or overwrite files |
| `Edit` | Make targeted edits to existing files |
| `Glob` | Find files by pattern |
| `Grep` | Search file contents |
| `Skill` | Load domain skills for task-relevant expertise |
| `Submit` | Submit the final answer (ends the agent loop) |
| `TodoWrite` | Track work items |
| `Task` | Delegate to sub-agents |
| `AskUserQuestionTool` | Ask the user for clarification (TUI only) |
| `WebSearch` | Web search (requires `BRAVE_API_KEY`) |
| `WebFetch` | Fetch and summarize web pages (requires `BRAVE_API_KEY`) |
### Echo Mode (no API key)
If no API key is set for the selected provider, the TUI launches in echo mode — useful for testing the UI and slash commands without consuming API tokens.
## Programmatic API

Embed Loopy's agent in other Java projects via `LoopyAgent`:

```java
LoopyAgent agent = LoopyAgent.builder()
        .model("claude-haiku-4-5-20251001")
        .workingDirectory(workspace)
        .systemPrompt("You are a Spring Boot expert.")
        .maxTurns(80)
        .build();

// Single task
LoopyResult result = agent.run("add input validation to UserController");

// Multi-step with session memory (context preserved across calls)
LoopyResult plan = agent.run("plan the refactoring of the service layer");
LoopyResult act = agent.run("now execute the plan");
```

### Builder Options
| Method | Description | Default |
|---|---|---|
| `.model(String)` | Anthropic model ID | `claude-sonnet-4-6` |
| `.workingDirectory(Path)` | Agent's working directory | required |
| `.systemPrompt(String)` | Custom system prompt | Built-in coding prompt |
| `.maxTurns(int)` | Max agent loop iterations | 80 |
| `.apiKey(String)` | API key override | `ANTHROPIC_API_KEY` env var |
| `.baseUrl(String)` | Custom API endpoint (vLLM, LM Studio) | Anthropic default |
| `.sessionMemory(boolean)` | Preserve context across `run()` calls | `true` |
| `.compactionModelName(String)` | Model for compaction (`null` to disable) | `claude-haiku-4-5-20251001` |
| `.compactionThreshold(double)` | Fraction of context limit triggering compaction | 0.5 |
| `.contextLimit(int)` | Token limit for compaction calculation | 200,000 |
| `.costLimit(double)` | Max cost in dollars before stopping | $5.00 |
| `.commandTimeout(Duration)` | Timeout for individual tool calls (bash, file ops) | 120s |
| `.timeout(Duration)` | Overall agent loop timeout | 10 min |
| `.disabledTools(Set<String>)` | Tools to exclude by name | none |

Note: The programmatic API defaults to `claude-sonnet-4-6` via Anthropic only. CLI mode defaults are set per-provider in `application.yml` (Anthropic: `claude-sonnet-4-20250514`, OpenAI: `gpt-4o`, Gemini: `gemini-2.5-flash`).
### Custom Endpoints

Loopy supports Anthropic-compatible API endpoints (vLLM, LM Studio, etc.) via the `.baseUrl()` builder method. HTTP/1.1 fallback is applied automatically for custom endpoints:

```java
LoopyAgent agent = LoopyAgent.builder()
        .baseUrl("http://localhost:1234/v1")
        .apiKey("lm-studio")
        .model("local-model")
        .workingDirectory(workspace)
        .build();
```

## Architecture
Single Maven module with four layers:

```
io.github.markpollack.loopy
├── agent/     MiniAgent loop, AgentLoopAdvisor, BashTool, SkillsTool, observability
├── tui/       ChatScreen (Elm Architecture via tui4j), ChatEntry
├── command/   SlashCommand interface, registry, /help, /clear, /quit, /skills
└── forge/     ExperimentBrief, TemplateCloner, TemplateCustomizer, /forge-agent
```

- **Agent layer** — MiniAgent is an embedded agent loop (copied from agent-harness, evolving independently). It runs a think-act-observe loop with tool calling. SkillsTool provides progressive skill discovery.
- **TUI layer** — Elm Architecture UI via tui4j. Agent calls run on a background thread; a spinner animates while waiting; Enter is gated to prevent overlapping calls.
- **Command layer** — Slash commands are intercepted in `ChatScreen.submitInput()` before reaching the agent. New commands implement the `SlashCommand` interface and register in `SlashCommandRegistry`.
- **Forge layer** — `/forge-agent` scaffolds agent experiment projects from YAML briefs: clone template, rename packages, update POM, generate config and README.
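As an illustration of the command layer's extension point, here is a hypothetical `/echo` command with a minimal dispatcher. The interface shape is assumed for the sketch and defined inline — check Loopy's real `SlashCommand` interface before copying:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Assumed interface shape for illustration; Loopy's real SlashCommand may differ.
interface SlashCommand {
    String name();
    String execute(String args);
}

// A hypothetical /echo command plus a minimal registry to dispatch it.
public class EchoCommandDemo {
    static class EchoCommand implements SlashCommand {
        public String name() { return "echo"; }
        public String execute(String args) { return args; }
    }

    static final Map<String, SlashCommand> REGISTRY = new LinkedHashMap<>();

    static String dispatch(String line) {
        // Lines starting with '/' are handled locally, never sent to the LLM.
        String[] parts = line.substring(1).split(" ", 2);
        SlashCommand cmd = REGISTRY.get(parts[0]);
        return cmd == null ? "unknown command" : cmd.execute(parts.length > 1 ? parts[1] : "");
    }

    public static void main(String[] args) {
        REGISTRY.put("echo", new EchoCommand());
        System.out.println(dispatch("/echo hello"));
    }
}
```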
## License
Business Source License 1.1 — free for non-production and limited production use. Converts to Apache 2.0 on the Change Date.