76 results for “topic:aisecurity”
CodeGate: Security, Workspaces and Multiplexing for AI Agentic Frameworks
AiScan-N is here! An AI-driven automated cybersecurity (and operations) tool focused on security assessment, vulnerability scanning, operations, incident response, and penetration-testing automation. An AI large-model toolkit (CLI Agent) that applies AI-driven security detection to improve the efficiency of security testing and operations. Built for enterprise and individual users, and especially suited for beginners to pick up quickly, helping you step into the era of intelligent security offense and defense. Use cases include red-team exercises, CTF competitions, web application penetration testing, internal-network lateral movement, password cracking and brute-force attacks, traffic analysis and threat detection, APT attack simulation, and bug-bounty hunting. 🎥 Demo video (in the article): https://mp.weixin.qq.com/s/7lsUdbrxkDy4P5pZhEWv7Q
Here Comes the AI Worm: Preventing the Propagation of Adversarial Self-Replicating Prompts Within GenAI Ecosystems
CTF challenges designed and implemented in machine learning applications
Move from idea to production in hours with policy-driven autonomous AI agents. Unified Control Plane: Centralised tools, MCPs, models, data, and policies with consistent observability and governance.
The AI Security Verification Standard (AISVS) focuses on providing developers, architects, and security professionals with a structured checklist to verify the security of AI-driven applications.
AI runtime inventory: discover shadow AI, trace LLM calls
A collection list for Large Language Model (LLM) Watermark
An interactive CLI application for interacting with authenticated Jupyter instances.
A collection of common vulnerability practice platforms, using AI to run automated penetration tests against the target ranges!
Powerful LLM Query Framework with YAML Prompt Templates. Made for Automation
[COLM 2025] JailDAM: Jailbreak Detection with Adaptive Memory for Vision-Language Model
A hybrid AI honeypot for monitoring large scale web attacks
AI Goat - Learn AI security by attacking and defending a real AI-powered e-commerce application. Built for Red Teamers, security researchers, AI enthusiasts, and students to learn about adversarial attacks on AI/LLM systems. It is strictly for educational use, and the authors disclaim responsibility for any misuse.
An open-source guide to Python for AI and Machine Learning
🤯 AI Security EXPOSED! Live Demos Showing Hidden Risks of 🤖 Agentic AI Flows: 💉Prompt Injection, ☣️ Data Poisoning. Watch the recorded session:
Securing LLMs Against the OWASP Top 10 Large Language Model Vulnerabilities 2024
CyberBrain_Model is an advanced AI project designed for fine-tuning the model `DeepSeek-R1-Distill-Qwen-14B` specifically for cybersecurity tasks.
A purely front-end tool for testing the security boundaries of large language models, helping researchers find and fix potential security vulnerabilities and improve the security and reliability of AI systems.
This repository is the official implementation of the paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms." ASSET achieves state-of-the-art reliability in detecting poisoned samples in end-to-end supervised, self-supervised, and transfer learning.
An intentionally vulnerable AI chatbot to learn and practice AI Security.
CyberBrain is an advanced AI project designed specifically for training artificial intelligence models on devices with limited hardware capabilities.
This repository demonstrates a variety of **MCP Poisoning Attacks** affecting real-world AI agent workflows.
This repo contains reference implementations, tutorials, samples, and documentation for working with Bosch AIShield
A Jailbroken GenAI Model Can Cause Real Harm: GenAI-powered Applications are Vulnerable to PromptWares
Your agentic API security engineer. Built by the community, for builders who care about security but don't have unlimited time or budget. Point it at your API docs and it hunts down the deep vulnerabilities that actually get you breached.
AI Red Team & Blue Team Tips & Tricks!
Preflight Security Scanner plugin for OpenClaw
LLM Security Project with Llama Guard
Zero Trust AI 360