Please watch the IEEE VR'22 presentation for a quick introduction to our work.

A PyTorch implementation of SPAA (paper). Please refer to the supplementary material (~66 MB) for more results.
- A PyTorch-compatible GPU with CUDA 11.7
- Conda (Python 3.9)
- Other packages are listed in requirements.txt.
- Create a new conda environment:

  ```bash
  conda create --name spaa python=3.9
  activate spaa       # Windows
  conda activate spaa # Linux
  ```
- Clone this repo:

  ```bash
  git clone https://github.com/BingyaoHuang/SPAA
  cd SPAA
  ```
- Install the required packages:

  ```bash
  pip install -r requirements.txt
  ```
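  Optionally, verify that the installed PyTorch build can see your GPU (a minimal sanity check, not part of this repo):

  ```python
  # Optional sanity check (assumption: run inside the activated spaa environment).
  import torch
  print(torch.__version__)          # installed PyTorch version
  print(torch.cuda.is_available())  # should print True for a CUDA-capable setup
  ```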
- Download the SPAA benchmark dataset (~3.25 GB) and extract it to `data/`; see `data/README.md` for more details.
- Start visdom by typing the following command in a local or server command line:

  ```bash
  visdom -port 8097
  ```
- Once visdom is successfully started, visit `http://localhost:8097` (train locally) or `http://server:8097` (train remotely).
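  If the server's port 8097 is not directly reachable, one common workaround (an assumption, not from this repo) is SSH port forwarding:

  ```bash
  # Forward the server's visdom port to your local machine (user@server is a placeholder).
  ssh -L 8097:localhost:8097 user@server
  # Then browse http://localhost:8097 locally.
  ```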
- Open `reproduce_paper_results.py` and set which GPUs to use. An example is shown below, where we use GPU 0:

  ```python
  os.environ['CUDA_VISIBLE_DEVICES'] = '0'
  ```
- Run `reproduce_paper_results.py` to reproduce benchmark results. To visualize the training process in visdom (slower), you need to set `plot_on=True`:

  ```bash
  cd src/python
  python reproduce_paper_results.py
  ```
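As an alternative to editing the script, the GPU can also be selected from the shell via the standard CUDA environment variable (a general CUDA convention, not specific to this repo):

```bash
# Hypothetical one-liner: select GPU 0 without editing reproduce_paper_results.py.
CUDA_VISIBLE_DEVICES=0 python reproduce_paper_results.py
```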
To apply SPAA to your own setups:

- Finish the steps above first.
- Open `main.py` and follow the instructions there. Execute each cell (starting with `# %%`) one by one (e.g., use PyCharm's *Execute cell in console*) to learn how to set up your projector-camera systems, capture data, train PCNet/CompenNet++, perform the three projector-based attacks (i.e., SPAA/PerC-AL+CompenNet++/One-pixel_DE), and generate the attack results; a sketch of how `# %%` cells work follows this list.
- The results will be saved to `data/setups/[your setup]/ret`.
- Training results of PCNet/CompenNet++ will also be saved to `log/%Y-%m-%d_%H_%M_%S.txt` and `log/%Y-%m-%d_%H_%M_%S.xls`.
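For reference, lines starting with `# %%` mark code cells that IDEs such as PyCharm and VS Code can execute one at a time in an interactive console. A minimal illustrative cell (not copied from `main.py`) looks like:

```python
# %% Sanity-check cell (illustrative only, not from main.py).
import torch
print('CUDA available:', torch.cuda.is_available())
```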
If you use the dataset or this code, please consider citing our work:
```bibtex
@inproceedings{huang2022spaa,
  title     = {SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image Classifiers},
  booktitle = {2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
  author    = {Huang, Bingyao and Ling, Haibin},
  year      = {2022},
  month     = mar,
  pages     = {534--542},
  publisher = {IEEE},
  address   = {Christchurch, New Zealand},
  doi       = {10.1109/VR51125.2022.00073},
  isbn      = {978-1-66549-617-9}
}
```
- This code borrows heavily from:
  - CompenNet and CompenNet++ for PCNet/CompenNet++.
  - Hyperparticle/one-pixel-attack-keras for the One-pixel_DE attacker.
  - ZhengyuZhao/PerC-Adversarial for the SPAA and PerC-AL+CompenNet++ attackers and the differentiable CIE deltaE 2000 metric.
  - cheind/py-thin-plate-spline for `pytorch_tps.py`.
  - Po-Hsun-Su/pytorch-ssim for the PyTorch implementation of SSIM loss.
- We thank the anonymous reviewers for valuable and inspiring comments and suggestions.
- We thank the authors of the colorful textured sampling images.
- Feel free to open an issue if you have any questions/suggestions/concerns 😁.
This software is freely available for non-profit, non-commercial use, and may be redistributed under the conditions in the license.