[ICCV'19] Improving Adversarial Robustness via Guided Complement Entropy
An ASR (Automatic Speech Recognition) adversarial attack repository.
This study explores the vulnerabilities of the Pathology Language-Image Pretraining (PLIP) model, a vision-language foundation model for medical AI, under targeted attacks such as PGD.
Adversarial Training of Autoencoders for Unsupervised Anomaly Segmentation
WideResNet implementation on the MNIST dataset: FGSM and PGD adversarial attacks against standard training, plus PGD adversarial training and Feature Scattering adversarial training (see the PGD sketch below).
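A minimal L-infinity PGD sketch in PyTorch, assuming a classifier `model` with inputs in [0, 1] (e.g., MNIST); `epsilon`, `alpha`, and `steps` are illustrative hyperparameters, not values taken from the repository above.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=0.3, alpha=0.01, steps=40):
    """L-infinity PGD: iterated gradient-sign steps projected back into the epsilon-ball."""
    # Random start inside the epsilon-ball around x.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project onto the epsilon-ball and valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv.detach()
```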
Evaluating CNN robustness against various adversarial attacks, including FGSM and PGD.
Adversarial Sample Generation
Personal research project, Spring 2022 semester.
My MA thesis (code, paper, and presentation) on adversarial out-of-distribution detection.
A university project for the AI4Cybersecurity class.
Learning adversarial robustness in machine learning, both in theory and practice.
Adversarially robust image classifier.
The Fast Gradient Sign Method (FGSM) is a white-box attack with a misclassification goal: it perturbs an input along the sign of the loss gradient so that a neural network makes a wrong prediction. We use this technique to anonymize images.
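A minimal single-step FGSM sketch in PyTorch, assuming a classifier `model` trained with cross-entropy and inputs normalized to [0, 1]; `epsilon` is an illustrative perturbation budget.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step FGSM: perturb x along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixels.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```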