18 results for “topic:owasp-llm-top-10”
SecureClaw - Security Plugin and Skill for OpenClaw OWASP-Aligned
Code scanner to check for issues in prompts and LLM calls
The pre-flight check for AI agents
LMAP (large language model mapper) is like NMAP for LLM, is an LLM Vulnerability Scanner and Zero-day Vulnerability Fuzzer.
AI security and prompt injection payload toolkit
An intentionally vulnerable AI chatbot to learn and practice AI Security.
AIDEFEND MCP is a local-first AI Security Defensive Assistant that brings the full AIDEFEND countermeasure library into your environment and turns static knowledge into actionable protection for LLMs and agentic AI systems — privately, securely, and on-device.
Basilisk — Open-source AI red teaming framework with genetic prompt evolution. Automated LLM security testing for GPT-4, Claude, Gemini. OWASP LLM Top 10 coverage. 32 attack modules.
Security Scanner for AI Agents — 200 detection patterns across 9 categories, 15 languages, F1=98.0%. Prompt injection, jailbreaks, data exfiltration, social engineering. EU AI Act compliance mapping. Zero dependencies.
GenAI-ML-SecAudit is an implementation of OWASP 2025 Top 10 for LLMs and Gen AI Apps risks. The tool simulate attacks, capture logs, and generate an interactive HTML graph that visualizes the results.
The Citadel is not just a training platform; it is a battleground. As AI systems integrate deeper into our critical infrastructure, the attack surface expands exponentially. This application is a purpose-built LLM Pentesting Environment designed to simulate real-world threats against Large Language Models.
this is a discovery i made and reported through the proper security channels.
No description provided.
AI Security Maturity Model and assessment toolkit—secure models, data, LLM/RAG, infra, monitoring, and IR across 11 domains and 5 levels, aligned to NIST AI RMF, SAIF, and OWASP LLM Top 10.
Adversarial testing and red-teaming framework for enterprise LLM deployments. Covers OWASP LLM Top 10 across 11 attack modules, RAG poisoning, tool-call abuse, PII leakage, credential harvesting, hallucination, and more. Built to run in CI/CD pipelines.
Python tool that detects prompt injection attacks in user input before it reaches any AI API. 30+ regex patterns, 0-100 risk scoring, BLOCKED/WARNING/SAFE verdicts. Part of a 10-project AI security portfolio.
An advanced, interactive educational platform focused on AI system vulnerabilities, attack vectors, and offensive security methodologies. [Prompt Injection, Model Evasion, Data Poisoning, Agent Hijacking]
🛡️ Explore Concurrent Context Contamination (CCC) risks in LLM session management to enhance security and guide defensive engineering practices.