93 results for “topic:llm-gateway”
🦍 The API and AI Gateway
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, load balancing and logging. [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, VLLM, NVIDIA NIM]
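Nearly every gateway in this list standardizes on the OpenAI chat-completions wire format. A minimal sketch of the JSON body a client POSTs to a gateway's `/v1/chat/completions` endpoint (field names follow the OpenAI API; the model string is a placeholder, and how a given gateway maps it to an upstream provider varies):

```python
import json

def chat_request(model: str, prompt: str, **params) -> str:
    """Build an OpenAI-format chat-completion request body.

    A gateway inspects the ``model`` string to decide which upstream
    provider (Bedrock, Azure, Anthropic, ...) the call is routed to.
    """
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        **params,  # e.g. temperature, max_tokens, stream
    }
    return json.dumps(body)

payload = chat_request("gpt-4o", "Hello", temperature=0.2)
print(payload)
```

Because the request shape is identical across providers, switching a client from one gateway (or provider) to another is usually just a `base_url` change.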
An open-source AI-first Identity and Access Management (IAM) / AI MCP gateway and auth server with web UI supporting MCP, A2A, OAuth 2.1, OIDC, SAML, CAS, LDAP, SCIM, WebAuthn, TOTP, MFA, Face ID, Google Workspace, Azure AD
A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with 1 fast & friendly API.
🚀 Next Generation Multi-tenant AI One-Stop Solution. Built-in Admin & Billing System. Enterprise-Grade Unified LLM Gateway Support for 200+ Models And 35+ Providers, Load Balancing w/ Priority-based Routing, Cost Management, Chat Share, Cloud Sync, Credit/Subscription Billing, All File Parsing, Web Search, Built-in Model Cache.
Plano is an AI-native proxy and data plane for agentic apps, with built-in orchestration, safety, observability, and smart LLM routing so you stay focused on your agents' core logic.
Fastest enterprise AI gateway (50x faster than LiteLLM) with adaptive load balancer, cluster mode, guardrails, 1000+ models support & <100 µs overhead at 5k RPS.
One Hub All LLMs For You | An LLM API aggregation service built for personal use
🦄 Cloud-native, ultra-high-performance AI & API gateway: an LLM API management and distribution system and open platform supporting all AI APIs, including but not limited to OpenAI, Azure, Anthropic Claude, Google Gemini, DeepSeek, ByteDance Doubao, ChatGLM, Baidu ERNIE Bot, iFlytek Spark, Alibaba Tongyi Qianwen, 360 Zhinao, Tencent Hunyuan, and other mainstream models. Unified API requests and responses, API application and approval, call statistics, load balancing, and multi-model failover. One-click deployment, ready out of the box.
Route, manage, and analyze your LLM requests across multiple providers with a unified API interface.
Debug your AI agents
AI Gateway: Claude Pro, Copilot, Gemini subscriptions → OpenAI/Anthropic/Gemini APIs. No API keys needed.
Open-source LLM router & AI cost optimizer. Routes simple prompts to cheap/local models, complex ones to premium — automatically. Drop-in OpenAI-compatible proxy for Claude Code, Codex, Cursor, OpenClaw. Saves 40-70% on AI API costs. Self-hosted, no middleman.
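The routing idea behind cost optimizers like this one can be sketched in a few lines: classify each prompt with a cheap heuristic and send it to a cheap or premium model accordingly. The threshold, keywords, and model names below are invented for illustration, not the project's actual logic:

```python
# Hypothetical complexity heuristic: long prompts, or prompts containing
# "reasoning-ish" keywords, go to the premium model; everything else goes
# to the cheap/local one. Real routers use trained classifiers, not this
# toy rule.
CHEAP_MODEL = "llama-3.1-8b-local"   # placeholder name
PREMIUM_MODEL = "claude-3-5-sonnet"  # placeholder name
KEYWORDS = ("prove", "refactor", "debug", "architecture")

def route(prompt: str) -> str:
    is_complex = len(prompt) > 500 or any(k in prompt.lower() for k in KEYWORDS)
    return PREMIUM_MODEL if is_complex else CHEAP_MODEL

print(route("What is 2 + 2?"))                       # short, simple prompt
print(route("Refactor this module to use asyncio"))  # keyword hit
```

Because the proxy is OpenAI-compatible, the caller never sees this decision; it just gets a completion back under whatever model the router chose.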
Bella OpenAPI is an API gateway offering a rich set of AI capabilities, comparable to OpenRouter. Unlike OpenRouter, beyond chat completion it also provides text embedding, speech recognition (ASR), speech synthesis (TTS), text-to-image, image-to-image, and other AI capabilities, and integrates billing, rate limiting, and resource management. Every integrated capability has been validated in large-scale production environments.
Build mods for Claude Code: Hook any request, modify any response, /model "with-your-custom-model", intelligent model routing using your logic or ours
Modular, open source LLMOps stack that separates concerns: LiteLLM unifies LLM APIs, manages routing and cost controls, and ensures high-availability, while Langfuse focuses on detailed observability, prompt versioning, and performance evaluations.
OpenAI-compatible HTTP LLM proxy / gateway for multi-provider inference (Google, Anthropic, OpenAI, PyTorch). Lightweight, extensible Python/FastAPI—use as library or standalone service.
Lightweight distributed LLM gateway w/ web UI for model management & routing. Supports vibe programming, prompt optimization, & optimized OpenAI API/Anthropic calls
Prompt management gateway with a UI for AI-powered applications. Enables non-technical users to test, refine, and deploy prompts seamlessly across multiple AI providers.
The reliability layer between your code and LLM providers.
A personal LLM gateway with fault-tolerant capabilities for calls to LLM models from any provider with OpenAI-compatible APIs. Advanced features like retry, model sequencing, and body parameter injection are also available. Especially useful to work with AI coders like Cline and RooCode and providers like OpenRouter.
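Fault-tolerant "model sequencing" of the kind described here boils down to trying a list of models in priority order and falling back on failure. A minimal sketch; the call function, model names, and retry count are stand-ins for real provider calls:

```python
from typing import Callable, Sequence

def call_with_fallback(
    call: Callable[[str, str], str],   # (model, prompt) -> completion
    models: Sequence[str],             # tried in priority order
    prompt: str,
    retries_per_model: int = 2,
) -> str:
    """Try each model up to retries_per_model times before moving on."""
    last_error: Exception | None = None
    for model in models:
        for _ in range(retries_per_model):
            try:
                return call(model, prompt)
            except Exception as err:  # real code would narrow this
                last_error = err
    raise RuntimeError("all models failed") from last_error

# Fake provider call: the first model always errors, the second succeeds.
def flaky_call(model: str, prompt: str) -> str:
    if model == "primary-model":
        raise TimeoutError("upstream timeout")
    return f"{model}: ok"

result = call_with_fallback(flaky_call, ["primary-model", "backup-model"], "hi")
print(result)
```

Production gateways layer backoff, per-model timeouts, and health tracking on top of this loop, but the fallback chain is the core of it.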
High-performance AI gateway written in Go - unified OpenAI-compatible API for OpenAI, Anthropic, Gemini, Groq, xAI & Ollama. LiteLLM alternative with observability, guardrails & streaming.
Open-source AI Gateway written in Go, one API for OpenAI, Anthropic, Bedrock, Azure, and 100+ LLMs. Built-in caching, guardrails, retries, and cost optimization. Run as a proxy or embed as a library.
AI Gateway as a Framework. Full control over models, routing & lifecycle. OpenAI-compatible /chat/completions, /embeddings & /models endpoints.
🧭🧭 An intelligent load balancer with smart scheduling that unifies diverse LLMs.
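At its simplest, the load-balancing core that several of these gateways share is a scheduler that picks the next upstream in proportion to its capacity. A weighted round-robin sketch; the endpoint names and static weights are invented for illustration:

```python
import itertools

# Invented endpoints with static weights; real balancers track latency,
# error rates, and quotas to adjust weights dynamically.
UPSTREAMS = {"openai-us": 3, "azure-eu": 2, "local-vllm": 1}

def weighted_cycle(weights: dict[str, int]):
    """Yield upstream names in proportion to their weights, forever."""
    expanded = [name for name, w in weights.items() for _ in range(w)]
    return itertools.cycle(expanded)

picker = weighted_cycle(UPSTREAMS)
first_six = [next(picker) for _ in range(6)]
print(first_six)
```

Smarter schedulers (least-loaded, latency-aware, priority-based) replace the static expansion with a runtime score, but expose the same "give me the next upstream" interface.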
A self-hosted AI toolkit running locally via Docker Compose, bundling an LLM gateway, workflow automation, and a chat UI — all backed by a shared PostgreSQL database.
Connect, set up, secure, and seamlessly manage LLM models using a universal, OpenAI-compatible API
A lightweight, DSL-driven LLM gateway for routing, patching provider quirks, and normalizing APIs across channels
An open-source, security-first LLM Gateway designed to provide a unified, secure, and observable entry point to any Large Language Model.
Headless, OpenAI-compatible AI gateway in Go. Multi-tenant auth, tracing, cost tracking, rate limits, and optional PII redaction. Single binary, self-hosted, audit-ready by design.