Francesco Marinoni Moretto
frmoretto
AI Systems Architect / Engineer | Building tools for AI safety & epistemic quality | Creator of Stream Coding, Clarity Gate, Hardstop, ArXiParse | Tech founder
Top Repositories
Stop "vibe coding" and start Stream Coding, the 10-20x velocity methodology for AI-accelerated development. Includes the official SKILL.md for Claude/Cursor/Windsurf and the complete Manifesto.
Stop LLMs from presenting your guesses as facts. Clarity Gate is a verification protocol for documents destined for LLMs or RAG systems. It automatically inserts missing uncertainty markers to prevent confident hallucinations, with human-in-the-loop (HITL) review for claims that cannot be directly verified.
Don't let AI destroy your hard work! HardStop is rock-solid protection for AI-generated commands: pre-execution safety validation for Claude Code and Claude Cowork. It catches dangerous commands before they run, whether they come from AI mistakes, hallucinations, prompt injection, or misunderstood instructions. Seatbelts for the agentic AI era.
The definitive guide to Claude's memory_user_edits - an officially documented but practically undiscovered feature. Based on systematic research and controlled testing with the 5Levels platform. Results: 62% error reduction, up to 100% accuracy improvement. Includes testing methodology and production best practices.
Decision memory and session logging for AI-assisted development. Track WHY decisions are made, not just WHAT changed. Works with Claude, Cursor, Roo Code, and other AI coding assistants.
428 regex patterns for detecting dangerous commands and credential file access. Data layer for HardStop and compatible AI safety tools.
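The pattern-data approach described above can be sketched as a simple scanner. This is a hypothetical illustration only: the sample rules and the `scan_command` helper below are invented for demonstration and are not taken from the HardStop data set (which ships 428 patterns).

```python
import re

# Hypothetical sample rules illustrating the regex-pattern approach;
# the real data layer provides 428 such patterns.
DANGEROUS_PATTERNS = [
    (re.compile(r"\brm\s+(-[a-zA-Z]*r[a-zA-Z]*f|-[a-zA-Z]*f[a-zA-Z]*r)\s+/"),
     "recursive force-delete from root"),
    (re.compile(r"\bchmod\s+777\b"),
     "world-writable permissions"),
    (re.compile(r"(\.aws/credentials|\.ssh/id_rsa|\.env)\b"),
     "credential file access"),
]

def scan_command(cmd: str) -> list[str]:
    """Return the reasons a shell command looks dangerous, or [] if clean."""
    return [reason for pattern, reason in DANGEROUS_PATTERNS if pattern.search(cmd)]
```

A pre-execution hook would call a scanner like this on every AI-generated command and block (or ask for confirmation) when any rule matches.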
Repositories (25)
Stop "vibe coding" and start Stream Coding, the 10-20x velocity methodology for AI-accelerated development. Includes the official SKILL.md for Claude/Cursor/Windsurf and the complete Manifesto.
Don't let AI destroy your hard work! HardStop is rock-solid protection for AI-generated commands: pre-execution safety validation for Claude Code and Claude Cowork. It catches dangerous commands before they run, whether they come from AI mistakes, hallucinations, prompt injection, or misunderstood instructions. Seatbelts for the agentic AI era.
No description provided.
Stop LLMs from presenting your guesses as facts. Clarity Gate is a verification protocol for documents destined for LLMs or RAG systems. It automatically inserts missing uncertainty markers to prevent confident hallucinations, with human-in-the-loop (HITL) review for claims that cannot be directly verified.
Trotting horses
HACKAPIZZA 2025
RAGù team code for HackaPizza 2
Team AI Caramba
Autonomous restaurant agent for Hackapizza 2.0. Event-driven bot built on the datapizza agentic framework: @tool-based strategic analysis, cross-turn memory, multi-agent orchestration, and adaptive pricing. Follows an agent-as-advisor pattern: deterministic fallbacks ensure reliability while the AI layer optimizes bidding, menu composition, and serving priority.
No description provided.
The definitive guide to Claude's memory_user_edits - an officially documented but practically undiscovered feature. Based on systematic research and controlled testing with the 5Levels platform. Results: 62% error reduction, up to 100% accuracy improvement. Includes testing methodology and production best practices.
The first comprehensive reverse-engineering of Claude.ai's 28 internal tools. Full parameter schemas, platform-specific behavior across browser/desktop/mobile, and reproducible discovery methodology.
428 regex patterns for detecting dangerous commands and credential file access. Data layer for HardStop and compatible AI safety tools.
A curated list of plugins that extend Claude Code with custom commands, agents, hooks, and MCP servers through the plugin system.
Claude skill & templates for creating epistemically honest Source of Truth documents. Prevents 6 failure modes: unqualified headers, unmarked estimates, self-assessment bias, absence-as-proof claims, illustrative-as-data, staleness. Companion to Clarity Gate.
Decision memory and session logging for AI-assisted development. Track WHY decisions are made, not just WHAT changed. Works with Claude, Cursor, Roo Code, and other AI coding assistants.
The awesome collection of Claude Skills and resources.
Public repository for Agent Skills
Official issue tracker for ArXiParse.org - Report bugs, request features, and help improve the arXiv-to-LLM semantic extraction tool
Voice-based LLM security testing framework with dual-AI verification. Tests prompt injection resistance, information control boundaries, and guardrail effectiveness. One AI enforces rules, another audits compliance. Practical tool for red-teaming and algorithmic accountability testing.
bloom - evaluate any behavior immediately
An alignment auditing agent capable of quickly exploring alignment hypotheses
Siberian Single App Edition (SAE), free and open-source app builder.
No description provided.
No description provided.