551 results for “topic:adversarial-machine-learning”
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Fawkes, a privacy-preserving tool against facial recognition systems. More info at https://sandlab.cs.uchicago.edu/fawkes
ChatGPT jailbreaks, GPT Assistants prompt leaks, GPTs prompt injection, LLM prompt security, super prompts, prompt hacking, AI prompt engineering, and adversarial machine learning.
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
The Security Toolkit for LLM Interactions
A Toolbox for Adversarial Robustness Research
A curated list of useful resources that cover Offensive AI.
A curated list of adversarial attacks and defenses papers on graph-structured data.
RobustBench: a standardized adversarial robustness benchmark [NeurIPS 2021 Benchmarks and Datasets Track]
Papers and resources related to the security and privacy of LLMs 🤖
T2F: text-to-face generation using deep learning
Unofficial PyTorch implementation of the paper "Progressive Growing of GANs for Improved Quality, Stability, and Variation"
A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.
GraphGallery is a gallery for benchmarking Graph Neural Networks
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
TransferAttack is a PyTorch framework for boosting adversarial transferability in image classification.
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
Provable adversarial robustness at ImageNet scale
A curated list of trustworthy deep learning papers, updated daily.
Backdoors Framework for Deep Learning and Federated Learning. A lightweight tool for conducting research on backdoors.
auto_LiRPA: An Automatic Linear Relaxation based Perturbation Analysis Library for Neural Networks and General Computational Graphs
💡 Adversarial attacks on explanations and how to defend them
A list of recent papers about adversarial learning
Create adversarial attacks against machine learning Windows malware detectors
Code for our NeurIPS 2019 *spotlight* "Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers"
A curated list of papers on adversarial machine learning (adversarial examples and defense methods).
CTF challenges designed and implemented in machine learning applications
A guided mutation-based fuzzer for ML-based Web Application Firewalls
A Python library for Secure and Explainable Machine Learning
Radio Frequency Machine Learning with PyTorch
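Many of the toolboxes above (ART, TextAttack, TransferAttack, the robustness benchmarks) revolve around generating adversarial examples: inputs perturbed just enough to flip a model's prediction. As a minimal sketch of the core idea, here is the one-step Fast Gradient Sign Method (FGSM) applied to a binary logistic-regression classifier in plain NumPy; the weights and input point are invented toy values for illustration, not drawn from any of the listed libraries.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM for binary logistic regression.

    The gradient of the binary cross-entropy loss with respect to the
    input x is (sigmoid(w @ x + b) - y) * w; FGSM perturbs x by eps in
    the sign direction of that gradient to maximize the loss.
    """
    grad = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad)

# Toy classifier and a correctly classified point (assumed values).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])   # w @ x + b = 1.5  -> predicted class 1
y = 1.0                    # true label

x_adv = fgsm(x, y, w, b, eps=1.0)
# The perturbed point crosses the decision boundary:
# w @ x_adv + b = -1.5, so the prediction flips to class 0.
```

Full-featured libraries such as ART generalize this same gradient-sign step to deep networks, batched inputs, and norm-bounded iterative variants (PGD), which is why FGSM is usually the first attack exposed in their evasion modules.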