77 results for “topic:token-usage”
🛰️ A CLI tool for tracking token usage from OpenCode, Claude Code, 🦞OpenClaw (Clawdbot/Moltbot), Pi, Codex, Gemini, Cursor, AmpCode, Factory Droid, Kimi, and more! • 🏅Global Leaderboard + 2D/3D Contributions Graph
A beautiful, zero-dependency command center for OpenClaw AI agents
Real-time guardrail that shows token spend & kills runaway LLM/agent loops.
Find the ghost tokens. Fix them. Survive compaction. Avoid context quality decay.
Ultra-fast token & cost tracker for LLM usage (e.g. Claude Code)
Local-first dashboard for Codex sessions with trends, token usage, tools, and word cloud | Codex local data-analytics dashboard
htop for your AI costs — real-time terminal monitoring of LLM token usage and spending across providers and coding agents
AI Coding stats dashboard — track costs, tokens, and efficiency across Claude Code / Gemini CLI / Codex / Cursor
Laravel AI Guard 🛡️💰🤖 - Control and optimize AI costs in Laravel AI SDK applications 🚀 Track OpenAI & LLM token usage 📊, estimate AI costs before execution ⚠️, enforce AI budgets 🧾, and prevent unexpected billing spikes in production 💥.
Control center for Claude Code & Codex — multi-model parallel sessions, Telegram remote control, scheduled cron tasks with push notifications, usage analytics, permission modes. CLI + native macOS desktop app.
Web dashboard for monitoring Claude Code usage and costs. Built with Next.js, ECharts, and SQLite.
Local proxy that tracks tokens, costs, prompts and responses across all your LLM API calls (with Claude Code support)
Blazing-fast Rust token usage tracker for Codex and Claude Code with unified reports, live monitoring, CLI/TUI/GUI dashboards.
Free, open-source Claude usage monitor for macOS & Raycast. Track token limits, burn rate, per-project costs, team budgets, and get alerts before you hit rate limits.
Real-time observability for OpenClaw agents — token usage, API cost, context health, and smart alerts. Zero config. 100% local.
Terminal UI for monitoring Claude Code token usage and costs
Local monitoring center for Codex CLI sessions: rolling 5-hour quota, token usage, estimated cost, and a NOC-style dashboard.
📊 Token Usage Dashboard — Beautiful visualization and analytics for LLM API consumption. Track, analyze, and optimize token usage across providers with 3D visualizations, cost tracking, and accurate token counting.
Fast CLI for Claude Code and OpenAI Codex token/cost usage analytics
Interactive terminal dashboard for Claude Code token usage and cost tracking — like htop, but for Claude AI
GLM Coding Plan smart status bar - helps users keep track of their plan usage in real time
One line to instrument your agent and capture every event in an immutable, queryable audit trail.
Monitor and analyze Claude Code token usage and costs in real-time with a terminal UI tailored for Max, Team, and Enterprise subscribers.
OpenCode plugin for monitoring token usage, estimating costs, and tracking analytics across AI coding sessions
Claude Code plugin for token-level cost intelligence — per-project, per-agent, per-skill usage tracking with local dashboard
Local viewer for Claude Code logs with CLI dashboard and token usage tracking
Find out what your Claude Code subscription is actually worth. Self-hosted dashboard for tracking token usage, API costs, quota utilization, and session analytics.
🛠️ Enhance Claude Code CLI’s refactoring with JetBrains IDEs, leveraging advanced semantic analysis for smarter code usage handling.
Lightweight X11/tmux-based token usage visualization for LLM systems. Simulates and monitors token consumption patterns using Unix tools (xload + named pipes) as a local development alternative to CloudWatch/Prometheus metrics. Useful for developing intuition about token usage patterns without deploying full observability stacks.
Codex usage analytics CLI that scans ~/.codex/sessions, aggregates tokens by day or model, and estimates cost using OpenRouter pricing.
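The scan-and-aggregate flow the last entry describes can be sketched in a few lines of Python. This is a minimal illustration, not that tool's implementation: the session file layout and the field names (`timestamp`, `usage.total_tokens`) are assumptions, since the actual Codex log schema may differ, and cost estimation via OpenRouter pricing is omitted.

```python
"""Sketch: aggregate token counts per day from JSONL session logs.

Assumes each file under the sessions directory is JSONL in which some
records carry an ISO 8601 `timestamp` and a `usage.total_tokens` count.
Treat the schema as hypothetical; adapt it to the real log format.
"""
import json
from collections import defaultdict
from pathlib import Path


def aggregate_tokens_by_day(sessions_dir: Path) -> dict[str, int]:
    totals: dict[str, int] = defaultdict(int)
    for path in sessions_dir.rglob("*.jsonl"):
        for line in path.read_text().splitlines():
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip malformed or partial lines
            usage = record.get("usage") or {}
            tokens = usage.get("total_tokens")
            ts = record.get("timestamp", "")
            if tokens and ts:
                totals[ts[:10]] += tokens  # bucket by YYYY-MM-DD
    return dict(totals)


if __name__ == "__main__":
    day_totals = aggregate_tokens_by_day(Path.home() / ".codex" / "sessions")
    for day, tokens in sorted(day_totals.items()):
        print(f"{day}  {tokens:>12,}")
```

Grouping by the first ten characters of an ISO 8601 timestamp is a cheap way to bucket by calendar day; a per-model breakdown would simply key the dict on `(day, model)` instead.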