A Toolbox for Adversarial Robustness Research
Implementation of Papers on Adversarial Examples
[CIKM 2022] Self-supervision Meets Adversarial Perturbation: A Novel Framework for Anomaly Detection (PyTorch)
Code for our recently published attack, FDA: Feature Disruptive Attack. Colab notebook: https://colab.research.google.com/drive/1WhkKCrzFq5b7SNrbLUfdLVo5-WK5mLJh
PyTorch implementation of Targeted Adversarial Perturbations for Monocular Depth Predictions (NeurIPS 2020)
PyTorch implementation of Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations (AAAI 2021)
CAMERO: Consistency Regularized Ensemble of Perturbed Language Models with Weight Sharing (ACL 2022)
Course project for EE782, IIT Bombay, Autumn 2019
PyTorch implementation of https://github.com/val-iisc/nag
Adversarial Attacks and Defenses via Image Perturbations
Repository for the final project of a Data Mining course
School AI semester project
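The projects listed above all revolve around the same core idea: adding a small, carefully chosen perturbation to an input so that a model's prediction changes. As a minimal, generic sketch (not taken from any of the repositories listed here), a one-step FGSM-style perturbation in PyTorch could look like the following; `model`, `x`, and `y` are assumed to be a classifier, an image batch with values in [0, 1], and its integer labels.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, eps=8 / 255):
    """One-step FGSM sketch: nudge x in the direction of the loss-gradient sign."""
    # model, x, and y are assumed inputs: a classifier, images in [0, 1], and labels.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        # Step each pixel by eps toward higher loss, then clamp to the valid image range.
        x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()
```

Dedicated toolboxes such as the ones listed above typically wrap this one-step attack, along with stronger iterative variants (e.g., PGD), behind a common attack interface.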