# Perceptual Quality Assessment of 360° Images Based on Generative Scanpath Representation

Xiangjie Sui*, Hanwei Zhu*, Xuelin Liu, Yuming Fang, Shiqi Wang, and Zhou Wang
- Environment
```shell
# PyTorch 2.0.1+cu117 & CUDA 11.7
conda env create -f requirements.yaml
```
- Datasets
We evaluate on three public datasets: CVIQ, OIQA, and JUFE. The files are organized as:

```
- imgs
  - img1
  - img2
  - ...
- mos.pkl  # a set of hash indexes: img_name -> mos
```
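Since `mos.pkl` stores a pickled mapping from image name to MOS, it can be read with Python's `pickle` module. A minimal, self-contained sketch (the file names and MOS values below are made up for illustration):

```python
import os
import pickle
import tempfile

# Toy mos.pkl with the layout described above:
# a dict mapping image file names to their MOS values (illustrative data).
mos = {"img1.png": 3.52, "img2.png": 4.10}

path = os.path.join(tempfile.mkdtemp(), "mos.pkl")
with open(path, "wb") as f:
    pickle.dump(mos, f)

# Loading it back, as a dataset loader would:
with open(path, "rb") as f:
    loaded = pickle.load(f)

print(loaded["img1.png"])  # -> 3.52
```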
- Pre-trained Models
We test three 3D backbones: Swin-T, ConvNeXt, and X-CLIP. The scanpath generator is derived from our CVPR 2023 paper (Paper & Code). You can download these pre-trained models from the corresponding authors, or from the sources we provide [Google Drive].

```shell
cd ./model
ls
convnext_tiny_1k_224_ema.pth
swin_tiny_patch244_window877_kinetics400_1k.pth
k400_32_16.pth
scandmm-seed-1238.pkl
```
- Running Commands
```shell
# JUFE
python -u train.py --db='./Dataset/JUFE' --nw=3 --backbone='xclip' --dbsd=1234 --bs=16 --lr=8e-6

# CVIQ
python -u train.py --db='./Dataset/CVIQ' --nw=3 --backbone='xclip' --dbsd=1234 --bs=16 --lr=8e-7

# OIQA
python -u train.py --db='./Dataset/OIQA' --nw=3 --backbone='xclip' --dbsd=1234 --bs=8 --lr=8e-7
```
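For reference, the command-line flags above could be parsed with a standard `argparse` setup. This is only a sketch of how `train.py` might define them; the help strings, defaults, and backbone choices other than `xclip` are assumptions, not the repository's actual code:

```python
import argparse

def build_parser():
    # Flag names mirror the training commands above (sketch, not the real train.py).
    p = argparse.ArgumentParser(description="GSR training (illustrative sketch)")
    p.add_argument("--db", type=str, help="dataset root, e.g. ./Dataset/JUFE")
    p.add_argument("--nw", type=int, default=3, help="number of dataloader workers")
    p.add_argument("--backbone", type=str, default="xclip",
                   choices=["swin", "convnext", "xclip"],  # assumed option names
                   help="3D backbone")
    p.add_argument("--dbsd", type=int, default=1234, help="dataset split seed")
    p.add_argument("--bs", type=int, default=16, help="batch size")
    p.add_argument("--lr", type=float, default=8e-6, help="learning rate")
    return p

# Parse the JUFE command from above:
args = build_parser().parse_args(
    ["--db=./Dataset/JUFE", "--nw=3", "--backbone=xclip",
     "--dbsd=1234", "--bs=16", "--lr=8e-6"])
print(args.db, args.bs)
```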
- Check the Checkpoints and Final Models
```shell
cd ./checkpoints
ls
JUFE-X-seed-1238.pth

cd ./model
ls
JUFE-X-seed-1238
```
We provide some pre-trained models [here].
```shell
cd ./model
ls
scandmm-seed-1238.pkl
CVIQ-X-seed-1238
OIQA-X-seed-1238
JUFE-X-seed-1238
```
```shell
# test
python -u train.py --test=True --cp=True --db='./Dataset/JUFE' --backbone='xclip' --dbsd=1238
```
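A note on the `--test=True` style flags: with `argparse`, `type=bool` treats any non-empty string (including `"False"`) as `True`, so boolean flags like these are usually parsed with an explicit converter. A common `str2bool` helper, shown here as an assumption about how such flags might be handled rather than the repository's actual code:

```python
import argparse

def str2bool(v):
    # argparse's type=bool would treat the string "False" as True,
    # so convert yes/no strings explicitly.
    if isinstance(v, bool):
        return v
    if v.lower() in ("yes", "true", "t", "1"):
        return True
    if v.lower() in ("no", "false", "f", "0"):
        return False
    raise argparse.ArgumentTypeError("boolean value expected")

p = argparse.ArgumentParser()
p.add_argument("--test", type=str2bool, default=False)
p.add_argument("--cp", type=str2bool, default=False)

args = p.parse_args(["--test=True", "--cp=True"])
print(args.test, args.cp)  # -> True True
```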
- Citation

```bibtex
@article{gsr2023,
  title={Perceptual Quality Assessment of 360° Images Based on Generative Scanpath Representation},
  author={Xiangjie Sui and Hanwei Zhu and Xuelin Liu and Yuming Fang and Shiqi Wang and Zhou Wang},
  year={2023},
  eprint={2309.03472},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```