REASY/k8s-ariadne-rs
Query Kubernetes with natural language by compiling English to Cypher. No context window bloat. Powered by Memgraph, Rust, and LLMs.
Ariadne
Ariadne turns Kubernetes state into a Memgraph property graph and lets you query it
with Cypher or natural language. This repo includes the Rust ingester + MCP server,
a Python NL -> Cypher agent with AST validation, and an eval harness.
Ariadne is a Greek mythology nod to finding a path through complex systems.
TL;DR
- LLMs generate Cypher, not answers
- Memgraph executes queries over full cluster state
- Schema + AST validation catches invalid queries before they run
- Context size stays tiny, regardless of cluster size
Ariadne turns natural language questions into compact, schema-valid graph queries, instead of bloated JSON prompts.
Demo
cli.mp4
The problem: JSON does not scale
Most "LLM + Kubernetes" tools do this:
Kubernetes API → giant JSON/YAML → LLM context → hope for the best
This breaks down fast:
- Context windows explode with cluster size
- Token costs scale with data, not intent
- Models pattern-match instead of query
- Answers are heuristic and non-reproducible
JSON is the wrong abstraction for querying systems.
The Ariadne approach
Ariadne flips the model from "LLM answers" to "LLM generates queries."
User question
↓
LLM (intent → Cypher)
↓
AST validator (schema + Cypher rules)
↓
Retry on validation error (LLM self‑corrects)
↓
GraphDB (Memgraph)
↓
Deterministic query execution on the current graph state; query generation is probabilistic.
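The generate → validate → retry loop above can be sketched in a few lines of Python. This is an illustrative toy, not the real agent: the function names (`ask`, `validate`) are hypothetical, and the validator only checks node labels against a hard-coded schema rather than parsing a full Cypher AST.

```python
import re

ALLOWED_LABELS = {"Host", "Ingress", "Service", "Pod"}  # toy schema

def validate(cypher: str) -> list[str]:
    """Toy validator: flag node labels that are not in the graph schema."""
    errors = []
    for label in re.findall(r"\(\s*\w*:(\w+)", cypher):
        if label not in ALLOWED_LABELS:
            errors.append(f"unknown label :{label}")
    return errors

def ask(question: str, llm, max_retries: int = 3) -> str:
    """Ask the LLM for Cypher; feed validation errors back until it passes."""
    prompt = question
    for _ in range(max_retries):
        cypher = llm(prompt)
        errors = validate(cypher)
        if not errors:
            return cypher  # schema-valid query, ready to send to Memgraph
        # Self-correction: retry with the validator's feedback appended.
        prompt = f"{question}\nFix these errors: {'; '.join(errors)}"
    raise RuntimeError("could not produce a schema-valid query")
```

The point of the loop is that validation errors are cheap and deterministic, so the LLM gets concrete, machine-generated feedback instead of a human guessing what went wrong.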
Key idea:
- The LLM does not see the raw cluster state.
- It only generates a Cypher query, then interprets the query results.
Why Cypher + GraphDB
Kubernetes is fundamentally a relationship graph:
- Ingress → Service → EndpointSlice → Pod
- Pod → Node → Namespace
- NetworkPolicy → PodSelector → Pod
Cypher encodes this structure naturally:
- Orders of magnitude smaller than JSON
- Explicit relationships and direction
- Statically validatable (edges, labels, direction)
Example Cypher generated by Ariadne to answer the question "What are the pods backing DNS name litmus.qa.agoda.is?":
MATCH
(h:Host)-[:IsClaimedBy]->(:Ingress)
-[:DefinesBackend]->(:IngressServiceBackend)
-[:TargetsService]->(:Service)
-[:Manages]->(:EndpointSlice)
<-[:ListedIn]-(ea:EndpointAddress)
-[:IsAddressOf]->(p:Pod)
WHERE
h.name = "litmus.qa.agoda.is"
RETURN DISTINCT
p.metadata.namespace AS namespace,
p.metadata.name AS pod_name
ORDER BY
namespace, pod_name

This query stays small no matter how big the cluster is.
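The "statically validatable" property can be illustrated with a toy schema check in Python. This is a sketch, not the real validator: the relationship triples below are written out by hand (mirroring the example query above, with arrows normalised to point from source to target) instead of being extracted from a parsed Cypher AST.

```python
# Toy subset of the graph schema: allowed (source, relationship, target) triples.
SCHEMA = {
    ("Host", "IsClaimedBy", "Ingress"),
    ("Ingress", "DefinesBackend", "IngressServiceBackend"),
    ("IngressServiceBackend", "TargetsService", "Service"),
    ("Service", "Manages", "EndpointSlice"),
    ("EndpointAddress", "ListedIn", "EndpointSlice"),
    ("EndpointAddress", "IsAddressOf", "Pod"),
}

def check_edges(triples):
    """Return one error string per triple the schema does not allow."""
    return [
        f"no edge (:{s})-[:{r}]->(:{t}) in schema"
        for (s, r, t) in triples
        if (s, r, t) not in SCHEMA
    ]

# Triples from the example query above:
query_triples = [
    ("Host", "IsClaimedBy", "Ingress"),
    ("Ingress", "DefinesBackend", "IngressServiceBackend"),
    ("IngressServiceBackend", "TargetsService", "Service"),
    ("Service", "Manages", "EndpointSlice"),
    ("EndpointAddress", "ListedIn", "EndpointSlice"),
    ("EndpointAddress", "IsAddressOf", "Pod"),
]
print(check_edges(query_triples))  # [] -> query is schema-valid
```

Because labels, edge names, and direction are all explicit in the query text, a wrong edge (say, a reversed direction) fails this check before the query ever reaches the database.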
Quick start
1) Start Memgraph (local)
docker compose up -d

Memgraph listens on localhost:7687 and Memgraph Lab on localhost:3000.
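For reference, a minimal equivalent of what such a compose file typically contains (an illustrative sketch only, assuming the memgraph/memgraph-platform image, which bundles Memgraph and Memgraph Lab; the repo's own docker-compose.yml is authoritative):

```yaml
services:
  memgraph:
    image: memgraph/memgraph-platform   # Memgraph + Memgraph Lab in one image
    ports:
      - "7687:7687"   # Bolt protocol (Cypher queries)
      - "3000:3000"   # Memgraph Lab UI
```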
2) Run the Rust app (graph + MCP server)
CLUSTER=<cluster> \
KUBE_CONTEXT=<context> \
cargo run --release -p ariadne-mcp

The app:
- builds the graph in Memgraph
- exposes HTTP endpoints (including MCP)
3) Query with the GUI CLI (no Memgraph required)
The CLI ships with an in-memory graph backend, so it works without running Memgraph.
LLM_BASE_URL=... \
LLM_MODEL=... \
LLM_API_KEY=... \
cargo run --release -p ariadne-cli -- --cluster <cluster>

4) Ask questions with the Python agent
cd python/agent
uv venv
uv sync

MCP_URL=http://localhost:8080/mcp \
LLM_MODEL=openai/gpt-5.2 \
k8s-graph-agent --use-adk "What are the pods backing DNS name litmus.qa.agoda.is?"

Docs
- Architecture: docs/architecture.md
- Development & build: docs/development.md
- Snapshots: docs/snapshots.md
- Python agent + eval harness: python/agent/README.md
- CLI: ariadne-cli/README.md
Repo structure
ariadne-core/ - core graph + Memgraph integration
ariadne-cli/ - GUI client with in-memory graph backend
ariadne-mcp/ - K8s ingestion + MCP + HTTP server
ariadne-tools/ - schema generation tooling
python/agent/ - ADK agent, AST validator, eval harness
License
See LICENSE.