11 results for “topic:token-counting”
No description provided.
🚀 Intelligent Claude Code status line with multi-provider AI support, real-time token counting, and universal model compatibility. Supports Claude (Sonnet 4: 1M, 3.5: 200K), OpenAI (GPT-4.1: 1M, 4o: 128K), Gemini (1.5 Pro: 2M, 2.x: 1M), and xAI Grok (3: 1M, 4: 256K) with verified 2025 context limits.
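As a hedged illustration of the limits this entry advertises, a lookup table like the one below could back such a status line. The figures are exactly those quoted above; the key names and function are hypothetical, not the project's actual identifiers.

```python
# Context-window limits (tokens) as quoted in the description above.
# Key names are illustrative, not the project's real model identifiers.
CONTEXT_LIMITS = {
    "claude-sonnet-4": 1_000_000,
    "claude-3.5": 200_000,
    "gpt-4.1": 1_000_000,
    "gpt-4o": 128_000,
    "gemini-1.5-pro": 2_000_000,
    "gemini-2.x": 1_000_000,
    "grok-3": 1_000_000,
    "grok-4": 256_000,
}

def tokens_remaining(model: str, used: int) -> int:
    """Tokens left in the model's window, clamped at zero."""
    return max(CONTEXT_LIMITS[model] - used, 0)
```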
Smart context window management for LLM conversations
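A minimal sketch of what "context window management" typically means in practice: trim the oldest turns until the conversation fits a token budget. The repo's actual policy is not stated above, so oldest-first eviction and the message shape are assumptions.

```python
def trim_to_budget(messages: list[dict], budget: int) -> list[dict]:
    """Drop the oldest messages until the total token count fits the budget.

    Assumes each message carries a precomputed "tokens" field; oldest-first
    eviction is one common strategy, not necessarily this project's.
    """
    kept = list(messages)
    while kept and sum(m["tokens"] for m in kept) > budget:
        kept.pop(0)  # evict the oldest turn first
    return kept
```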
A local proxy that converts websites and APIs to clean Markdown. Convert HTML pages, JSON APIs, and dynamic sites. Get token counts for LLM budgeting.
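A minimal sketch of calling such a proxy, assuming it listens locally and takes the target URL as a query parameter. The host, port, path, and parameter name here are all hypothetical; the project's actual interface is not given above.

```python
import urllib.parse
import urllib.request

# Hypothetical local endpoint and query parameter; the real interface
# is not specified in the description above.
target = urllib.parse.quote("https://example.com/article", safe="")
with urllib.request.urlopen(f"http://localhost:8080/fetch?url={target}") as resp:
    markdown = resp.read().decode("utf-8")
print(markdown[:200])  # clean Markdown rendering of the page
```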
ttok-style token counting for Amazon Bedrock
Compare LLM API costs instantly: run npx llm-costs "your prompt" --compare to price a prompt across 17 models. Auto-updating pricing.
⚡ Production-grade real-time AI cost enforcement system. Sub-5ms balance checks, atomic operations, gRPC + REST APIs. Stop AI overages before they happen.
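A toy illustration of the atomic check-and-reserve pattern such enforcement implies: the balance check and the debit must happen as one operation so concurrent requests cannot slip past the limit. This in-process version is illustrative only and is not the project's API; a real deployment would do this against a shared store.

```python
import threading

class Budget:
    """Toy atomic check-and-reserve; not the project's actual API."""

    def __init__(self, balance_cents: int):
        self._balance = balance_cents
        self._lock = threading.Lock()

    def try_reserve(self, cost_cents: int) -> bool:
        """Atomically debit the balance, refusing if it would go negative."""
        with self._lock:
            if self._balance < cost_cents:
                return False  # stop the call before the overage happens
            self._balance -= cost_cents
            return True
```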
Production-grade PowerShell module bundler. Integrated with Pester for automated testing and GitHub Actions for stable CI/CD pipelines.
A blazing-fast BPE tokenizer for LLMs. Drop-in tiktoken replacement, 20-80x faster.
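Since the entry above advertises a drop-in tiktoken replacement, existing tiktoken call sites should presumably keep working unchanged. The snippet below is ordinary tiktoken usage; the replacement's actual import name is not given above.

```python
import tiktoken  # a drop-in replacement would be swapped in for this import

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("How many tokens is this sentence?")
print(len(tokens))  # token count for LLM budgeting
assert enc.decode(tokens) == "How many tokens is this sentence?"
```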
Smart context management for LLMs
Display key status details for Claude Code including model, context, limits, git info, and session time across macOS, Linux, and Windows.