11 results for “topic:model-stealing”
Code for ML Doctor
Code for "CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples" (NDSS 2020)
Official Source Code of "Exploring Effective Data for Surrogate Training Towards Black-box Attack" and "STDatav2: Accessing Efficient Black-Box Stealing for Adversarial Attacks".
Implementations of attacks on security and privacy in ML: evasion attacks, model stealing, model poisoning, membership inference attacks, ...
Official implementation of "Stealthy Imitation: Reward-guided Environment-free Policy Stealing" (ICML 2024)
An implementation of ActiveThief for stealing cloud-hosted models.
Official implementation of "Stealix: Model Stealing via Prompt Evolution" (ICML 2025)
Official implementation of "Medical Multimodal Model Stealing Attacks via Adversarial Domain Alignment" (AAAI-2025 oral)
Testing adversarial ML attacks (data poisoning, targeted misclassification, and model extraction) and discussing the defensive tradeoffs that arise in real deployments.
Repository for my Bachelor Thesis at Karlsruhe Institute of Technology.
An advanced, interactive educational platform focused on AI system vulnerabilities, attack vectors, and offensive security methodologies. [Prompt Injection, Model Evasion, Data Poisoning, Agent Hijacking]
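Several of the repositories above implement variants of black-box model extraction: query the victim model, record its outputs, and train a surrogate on the stolen query/response pairs. As a rough illustration only (this is a hypothetical toy sketch, not code from any listed repository; the victim here is a hidden linear classifier and the surrogate is a perceptron), the generic loop looks like:

```python
import random

random.seed(0)

# Hidden "victim" model: the attacker can only query it, not read its weights.
W = [0.8, -0.5, 0.3]

def victim(x):
    return 1 if sum(w * xi for w, xi in zip(W, x)) > 0 else 0

# Step 1: the attacker issues queries and records the victim's labels.
queries = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(500)]
labels = [victim(q) for q in queries]

# Step 2: train a surrogate (here a simple perceptron) on the stolen pairs.
w = [0.0, 0.0, 0.0]
for _ in range(20):  # epochs
    for x, y in zip(queries, labels):
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
        for i in range(3):
            w[i] += 0.1 * (y - pred) * x[i]

def surrogate(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

# Step 3: measure extraction fidelity as agreement with the victim
# on fresh inputs the surrogate never queried.
fresh = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
agreement = sum(surrogate(x) == victim(x) for x in fresh) / len(fresh)
print(f"surrogate/victim agreement: {agreement:.2f}")
```

The listed attacks differ mainly in how step 1 chooses queries (ActiveThief uses active learning, CloudLeak uses adversarial examples, Stealix evolves prompts), but the query-label-train-measure skeleton is common to all of them.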