42 results for “topic:machine-learning-security”
A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.
Simple PyTorch implementation of FGSM and I-FGSM
Pretrained BERT model for cybersecurity text, trained on cybersecurity knowledge
CTF challenges designed and implemented around machine learning applications
Code for our USENIX Security 2021 paper -- CADE: Detecting and Explaining Concept Drift Samples for Security Applications
Reading list for adversarial perspective and robustness in deep reinforcement learning.
Adversarial Machine Learning (AML) Capture the Flag (CTF)
Train AI (Keras + TensorFlow) to defend apps with Django REST Framework + Celery + Swagger + JWT - deploys to Kubernetes and OpenShift Container Platform
AI SBOM: AI Software Bill of Materials - The Supply Chain for Artificial Intelligence
The Anti-Virus for AI Artifacts & RAG Firewall. A static analysis tool that scans models and notebooks for RCE, and datasets and RAG documents for data poisoning, PII, and prompt injection. Secure your AI supply chain.
Do you want to learn AI security but don't know where to start? Take a look at this map.
Hands-on lessons for attacking and defending AI systems, starting with the OWASP Top 10 for LLM Applications.
Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers' Outputs (ACM CCS'21)
Datasets for training deep neural networks to defend software applications
Test and evaluate Large Language Models against prompt injections, jailbreaks, and adversarial attacks with a web-based interactive lab.
Summary of the presentation on Real and Stealthy Attacks on State-of-the-Art Face Recognition Systems at the Seminar: Machine Learning in Cyber-security at FU Berlin
Systematic Security Evaluation Framework for AI Coding Assistants - Detection of prompt injection vulnerabilities
Educational research demonstrating weight manipulation attacks in SafeTensors models. Proves format validation alone is insufficient for AI model security.
Understanding Adversarial Attacks Through MNIST
Build an AI Security Analyst Assistant with RAG, learning from scratch
Final Year Thesis Project (COMP4981H) for Computer Science Students in HKUST
Awesome-DL-Security-and-Privacy-Papers
An adversarial perturbation intensity strategy that achieves a chosen intra-technique transferability level for logistic regression
A stochastic input pre-processing technique based on down-sampling/up-sampling with convolution and transposed-convolution layers, defending convolutional neural networks against adversarial attacks.
Security Vulnerabilities and Defensive Mechanisms in CLI/Terminal-Based Large Language Model Deployments - A Comprehensive Research Synthesis (Technical Report, November 2025)
DeepProv: Behavioral Characterization and Repair of Neural Networks via Inference Provenance Graph Analysis
Detect and defend against adversarial attacks on ML models
CISPA European Championship 2026 — 3-task ML security hackathon: black-box image reconstruction, generative model fingerprinting, and hardware-dependent chimera generation.
Repository for the homework assignments of COMP 530 Data Privacy and Security, taught by Emre Gursoy at Koc University.
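Several entries above center on FGSM and adversarial examples for image classifiers. As a minimal sketch of the technique those repositories implement (assuming a standard PyTorch classifier and cross-entropy loss; the function name and `eps` value here are illustrative, not taken from any listed repo):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """Fast Gradient Sign Method: nudge each input pixel by eps in the
    direction that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One signed-gradient step, then clamp back to the valid pixel range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

The iterative variant (I-FGSM) simply repeats this step with a smaller `eps`, clipping the cumulative perturbation after each round.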