40 results for “topic:data-poisoning”
A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.
A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them (no longer maintained)
A curated list of academic events on AI Security & Privacy
APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024)
[ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning
[NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training
🤯 AI Security EXPOSED! Live Demos Showing Hidden Risks of 🤖 Agentic AI Flows: 💉Prompt Injection, ☣️ Data Poisoning. Watch the recorded session:
The official implementation of the USENIX Security '23 paper "Meta-Sift": ten minutes or less to find a clean subset of 1,000 or more samples in a poisoned dataset.
Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression
How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021)
Experiments on Data Poisoning Regression Learning
MIT IEEE URTC 2023. GSET 2023. Repository for "SeBRUS: Mitigating Data Poisoning in Crowdsourced Datasets with Blockchain". Using Ethereum smart contracts to stop AI security attacks on crowdsourced datasets.
CCS'22 Paper: "Identifying a Training-Set Attack’s Target Using Renormalized Influence Estimation"
Measure and Boost Backdoor Robustness
Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning (NeurIPS 2021)
Code for the paper "Analysis and Detectability of Offline Data Poisoning Attacks on Linear Systems"
Analyzing Adversarial Bias and the Robustness of Fair Machine Learning
A backdoor attack in a Federated learning setting using the FATE framework
[NeurIPS 2022] Can Adversarial Training Be Manipulated By Non-Robust Features?
A research framework for implementing and evaluating poisoning attacks on Retrieval-Augmented Generation (RAG) systems, enabling the study of their security vulnerabilities.
TOAN is a toolkit designed to simplify the generation of poisoned datasets for machine learning robustness research.
Project for the Cybersecurity course 2025/2026
White-paper & talk covering benefits, risks, and mitigation frameworks for AI and LLMs in cybersecurity (NIST AI RMF, OWASP Top 10 for LLMs, MITRE ATLAS, real-world case studies)
A federated learning framework built with Flower and PyTorch to evaluate the robustness of FL systems under data poisoning attacks.
The first anti-AI tarpit. Rewritten in Python. Traps LLM crawlers in an infinite maze of fake pages and Markov babble.
🤖 AI/ML poisoning attack research | Adversarial machine learning | NullSec Framework | @AnonAntics
Testing adversarial ML attacks (data poisoning, targeted misclassification, and model extraction) and discussing defensive tradeoffs that exist for real deployments.
🛡️ PROACT: PROjection and Activation Constrained Training for poisoning-resilient continual learning
A system-level analysis of how Retrieval-Augmented Generation (RAG) pipelines break, and how those failures propagate.