288 results for “topic:adversarial-examples”
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
Advbox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, and TensorFlow. Advbox can benchmark the robustness of machine learning models and provides a command-line tool to generate adversarial examples with zero coding.
A Toolbox for Adversarial Robustness Research
A PyTorch adversarial library for attack and defense methods on images and graphs
Raising the Cost of Malicious AI-Powered Image Editing
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
🗣️ Tool to generate adversarial text examples and test machine learning models against them
Implementation of Papers on Adversarial Examples
Adversarial attacks and defenses on Graph Neural Networks.
alpha-beta-CROWN: An Efficient, Scalable and GPU Accelerated Neural Network Verifier (winner of VNN-COMP 2021, 2022, 2023, 2024, 2025)
auto_LiRPA: An Automatic Linear Relaxation based Perturbation Analysis Library for Neural Networks and General Computational Graphs
💡 Adversarial attacks on explanations and how to defend them
A list of recent papers about adversarial learning
A curated list of awesome resources for adversarial examples in deep learning
Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models (published in ICLR2018)
DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model
PhD/MSc course on Machine Learning Security (Univ. Cagliari)
A curated list of papers on adversarial machine learning (adversarial examples and defense methods).
Official TensorFlow Implementation of Adversarial Training for Free! which trains robust models at no extra cost compared to natural training.
A curated list of academic events on AI Security & Privacy
Physical adversarial attack for fooling the Faster R-CNN object detector
Library containing PyTorch implementations of various adversarial attacks and resources
PyTorch library for adversarial attack and training
Revisiting Transferable Adversarial Images (TPAMI 2025)
This repository contains implementations of three adversarial example attack methods (FGSM, I-FGSM, MI-FGSM) and of defensive distillation as a defense against all three, evaluated on the MNIST dataset.
[CVPR 2020] When NAS Meets Robustness: In Search of Robust Architectures against Adversarial Attacks
Resources on adversarial examples and poisoning attacks
Purple-team telemetry & simulation toolkit.
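Several of the repositories above implement FGSM (the fast gradient sign method), which perturbs an input by epsilon times the sign of the loss gradient with respect to that input. A minimal sketch in plain NumPy, using an illustrative logistic regression model rather than the code of any listed repository (the weights, input, and epsilon below are assumptions for the example):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Return x + eps * sign(grad_x L), where L is binary cross-entropy.

    For logistic regression, dL/dz = p - y and dz/dx = w, so the input
    gradient is (p - y) * w in closed form (no autodiff needed here).
    """
    p = sigmoid(w @ x + b)       # model's predicted probability of class 1
    grad_x = (p - y) * w         # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Toy model and input (assumed values, for illustration only)
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.2, 0.4, -0.3])   # clean input
y = 1.0                          # true label

x_adv = fgsm(x, y, w, b, eps=0.1)

# The perturbation increases the loss, pushing the prediction
# away from the true label:
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))
```

Because the perturbation is `eps * sign(...)`, it is the maximal step inside an L-infinity ball of radius `eps`; iterating this step with a smaller step size yields the I-FGSM variant also mentioned above.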