41 results for “topic:ai-guardrails”
Packmind seamlessly captures your engineering playbook and turns it into AI context, guardrails, and governance.
🔥🔥🔥 Enterprise AI middleware; an alternative to unifyapps, n8n, and lyzr
FSPEC: The Spec-Driven, Multi-Agent Coding Factory. It is infrastructure for the "Dark Factory"—the emerging model of fully autonomous software development where AI agents handle all implementation while humans focus on defining what to build and why.
Middleware layer for Pydantic AI — intercept, transform & guard agent calls with 7 lifecycle hooks, parallel execution, async guardrails, conditional routing, and tool-level permissions.
Validate that supporting text quotes in your data actually appear in their cited references
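The quote-validation idea above can be sketched in a few lines: check that a supporting quote appears verbatim in its cited reference after normalizing whitespace and case. The function name and normalization choices here are illustrative assumptions, not the project's API.

```python
def verify_quote(quote: str, reference_text: str) -> bool:
    """Check that a supporting quote appears verbatim in its cited reference.

    Hypothetical helper: normalizes whitespace and case so line-wrapping
    differences don't cause false negatives.
    """
    def normalize(s: str) -> str:
        return " ".join(s.split()).lower()
    return normalize(quote) in normalize(reference_text)

reference = "Guardrails must be enforced at runtime,\nnot merely documented."
assert verify_quote("enforced at runtime, not merely documented", reference)
assert not verify_quote("enforced at compile time", reference)
```

A production checker would typically add fuzzy matching to tolerate minor edits (ellipses, bracketed insertions) rather than requiring an exact substring.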
Mechanical enforcement tools to prevent AI agents from bypassing established project standards.
A Python implementation of the VETTING (Verification and Evaluation Tool for Targeting Invalid Narrative Generation) framework for LLM safety and educational applications.
Give AI coding agents (Claude Code, Cursor, Aider, Codex) a structured autonomous loop with guardrails — boundaries, 5 verification gates, 3-layer self-reflection, and autonomous remediation. pip install ouro-loop. Zero dependencies.
Real-time AI safety guardrails for LLM apps. 10 scanners: prompt injection, PII, harmful content, code vulnerabilities, obfuscation detection. Sub-ms latency. Python + TypeScript SDKs. MCP proxy. Claude Code hooks.
LLM budget control and cost governance for AI agents. Python library for token budgets, usage limits and guardrails for OpenAI, Anthropic, LangChain, LangGraph and agentic systems.
L0: The Missing Reliability Substrate for AI. Streaming-first. Reliable. Replayable. Deterministic. Multimodal. Retries. Continuation. Fallbacks (provider & model). Consensus. Parallelization. Guardrails. Atomic event logs. Byte-for-byte replays.
paceval is a high-performance mathematical runtime for deterministic AI and energy-efficient edge computing.
Runtime guardrails for AI agents that enforce token budgets, loop limits, and tool rate limits locally.
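The three local enforcement mechanisms named in the entry above — token budgets, loop limits, and tool rate limits — can be sketched in a single guard object. All class and method names here are hypothetical, chosen for illustration; they are not the listed library's API.

```python
import time

class AgentGuard:
    """Illustrative local runtime guardrail for an agent loop:
    a token budget, an iteration cap, and a tool-call rate limit."""

    def __init__(self, token_budget: int, max_loops: int, tool_rate_per_sec: float):
        self.tokens_left = token_budget
        self.loops_left = max_loops
        self.min_interval = 1.0 / tool_rate_per_sec
        self._last_tool_call = 0.0

    def charge_tokens(self, n: int) -> None:
        # Deduct tokens; abort the run once the budget is exhausted.
        self.tokens_left -= n
        if self.tokens_left < 0:
            raise RuntimeError("token budget exhausted")

    def tick_loop(self) -> None:
        # Count one agent iteration; abort on runaway loops.
        self.loops_left -= 1
        if self.loops_left < 0:
            raise RuntimeError("loop limit exceeded")

    def allow_tool_call(self) -> bool:
        # Simple minimum-interval rate limit on tool invocations.
        now = time.monotonic()
        if now - self._last_tool_call < self.min_interval:
            return False
        self._last_tool_call = now
        return True
```

The agent loop would call `tick_loop()` once per iteration and `charge_tokens()` after each model response; because all state is local, the limits hold even if the model provider or framework changes underneath.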
Turn Claude Code from a confident junior into an autonomous senior developer. Behavioral rules, automated enforcement, and persistent learning.
A secure, governable AI gateway for Splunk with operational guardrails. An alternative to Splunk AI Assistant focused on safety, compliance, and predictable results using a 'Configuration as Code' approach.
Cross-agent hook manager for AI coding assistants — installs and orchestrates lifecycle hooks across Claude Code, Cursor, and Gemini from a single manifest.
Enable true asynchronous tool calls in agent frameworks with immediate acknowledgements and non-blocking result callbacks for continuous interaction.
GuardRail — Sovereign uptime monitoring. Fork of Uptime Kuma. Fleet health for 41 pages, 7 nodes, 18 domains. Proprietary BlackRoad OS.
DecipherGuard: Understanding and Deciphering Jailbreak Prompts for a Safer Deployment of Intelligent Software Systems
An educational example showing how to build a guardrailed, tool-augmented AI assistant in C# (.NET 10) using Ollama, with deterministic validation, tool constraints, timeouts, and safe fallbacks.
Deterministic governance engine for AI agents. Enforce rules defined in .md governance files across AI systems.
Modular and safe prompt templates for GPT agents (resume, HR, guardrails)
Advanced AI Agent playground with Gemini/GPT integration, supporting mocked/production RAG, history compression, and detailed data provenance for logic validation.
SpecGuard is a command-line tool that turns AI safety policies and behavioral guidelines into executable tests. Think of it as unit testing for your AI's output. Instead of trusting that your AI will follow the rules defined in a document, SpecGuard enforces them.
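The "unit testing for your AI's output" idea above can be sketched as a set of named rules, each a predicate over model output, with a checker that reports violations. The rule names and predicates are illustrative assumptions, not SpecGuard's actual spec format.

```python
import re

# Hypothetical mini "spec": each rule maps a name to a pass/fail predicate.
RULES = {
    "no_ai_disclaimer": lambda out: "as an ai" not in out.lower(),
    "cites_source":     lambda out: bool(re.search(r"\[\d+\]", out)),
    "under_limit":      lambda out: len(out.split()) <= 50,
}

def check_output(output: str) -> list[str]:
    """Return the names of rules the output violates."""
    return [name for name, ok in RULES.items() if not ok(output)]

good = "Token budgets cap spend [1]."
assert check_output(good) == []

bad = "As an AI, I cannot verify this claim in detail."
assert "no_ai_disclaimer" in check_output(bad)
assert "cites_source" in check_output(bad)
```

Running such checks in CI turns a policy document into a regression suite: any prompt or model change that breaks a rule fails the build instead of slipping into production.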
🤖 Build a guardrailed, tool-augmented AI assistant in C# with deterministic boundaries for safe, reliable outputs and local chat capabilities.
Three practical examples of AI guardrails: LLM guardrails, agent tool-calling authorization, and multi-agent governance patterns for enterprise AI systems.
Guardrails + audit-grade traces for tool-calling AI workflows
Perihelion is a robust AI Firewall designed to sit between your users and Large Language Models (LLMs). It acts as a protective middleware, sanitizing inputs, enforcing security policies, and detecting adversarial attacks in real-time.
Practical guide to Claude Code's six guardrail layers with ready-to-use examples