Code for ML Doctor
Updated Aug 14, 2024 - Python
Code for "CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples" (NDSS 2020)
Implementations of security and privacy attacks in ML: evasion attacks, model stealing, model poisoning, membership inference attacks, ...
An implementation of ActiveThief for stealing cloud-hosted models.
Repository for my Bachelor Thesis at Karlsruhe Institute of Technology.
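The repositories above all center on model stealing (also called model extraction): querying a black-box victim model for labels and training a surrogate that mimics it. A minimal sketch of that loop, assuming a hypothetical linear victim model and a plain perceptron as the surrogate (both stand-ins, not the method of any repo listed here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box "victim": the attacker can only query it for
# labels, not inspect its weights (here, a simple linear classifier).
w_true = np.array([1.5, -2.0])

def victim_predict(X):
    return (X @ w_true > 0).astype(int)

# 1. Attacker samples query inputs and records the victim's answers.
X_query = rng.normal(size=(2000, 2))
y_query = victim_predict(X_query)

# 2. Attacker fits a surrogate on the (input, label) pairs.
#    Perceptron updates stand in for any choice of learner.
w_sur = np.zeros(2)
for _ in range(20):
    for x, y in zip(X_query, y_query):
        pred = int(x @ w_sur > 0)
        w_sur += (y - pred) * x

# 3. Agreement with the victim on fresh inputs measures how well
#    the model was "stolen".
X_test = rng.normal(size=(1000, 2))
agreement = (victim_predict(X_test) == (X_test @ w_sur > 0).astype(int)).mean()
print(f"surrogate/victim agreement: {agreement:.2f}")
```

Real attacks such as ActiveThief or CloudLeak refine step 1, choosing queries actively or adversarially to extract more information per query.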