The training procedure uses data in the LMDB format. To launch training or evaluation on the WIDER FACE dataset, download it from the official site, extract the images and annotations into the <DATA_DIR> folder, and use the provided scripts to convert the original annotations to LMDB.
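For example, assuming the standard WIDER FACE archive names (an assumption; adjust to the files you actually downloaded), extraction could look like:

```bash
# Assumed archive names from the WIDER FACE download page; extract
# the images and the annotation split files into <DATA_DIR>.
unzip WIDER_train.zip -d <DATA_DIR>
unzip WIDER_val.zip -d <DATA_DIR>
unzip wider_face_split.zip -d <DATA_DIR>
```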
To create the LMDB files, go to the '$CAFFE_ROOT/python/lmdb_utils/' directory and run the following scripts:
- Run docker in an interactive session with the directory containing the WIDER dataset mounted:
```bash
nvidia-docker run --rm -it --user=$(id -u) -v <DATA_DIR>:/data ttcf bash
```
- Convert the original annotations to XML format for both the train and val subsets:
```bash
python3 $CAFFE_ROOT/python/lmdb_utils/wider_to_xml.py /data /data/WIDER_train/images/ /data/wider_face_split/wider_face_train_bbx_gt.txt train
python3 $CAFFE_ROOT/python/lmdb_utils/wider_to_xml.py /data /data/WIDER_val/images/ /data/wider_face_split/wider_face_val_bbx_gt.txt val
```
- Convert the XML annotations to a set of per-image XML files:
```bash
python3 $CAFFE_ROOT/python/lmdb_utils/xml_to_ssd.py --ssd_path /data --xml_path_train /data/wider_train.xml --xml_path_val /data/wider_val.xml
```
- Run the bash script to create the LMDB:
```bash
bash $CAFFE_ROOT/python/lmdb_utils/create_wider_lmdb.sh
```
- Close the docker session with Ctrl+D and check that the LMDB files have appeared in <DATA_DIR>.
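For example, a quick sanity check (a sketch; the exact LMDB directory names are defined by the create_wider_lmdb.sh script and may differ):

```bash
# List the LMDB entries created in the dataset directory.
ls -d <DATA_DIR>/*lmdb*
```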
The next stage is to train the Face Detection model. To do so, follow these steps:
```bash
cd ./models
python3 train.py --model face_detection \
    --weights face-detection-retail-0044.caffemodel \
    --data_dir <DATA_DIR> \
    --work_dir <WORK_DIR> \
    --gpu <GPU_ID>
```
Here --model is the name of the model to train, --weights initializes the weights from the 'init_weights' directory, --data_dir is the path to the directory with the dataset, --work_dir is the directory where files produced by the training process are collected, and --gpu is the id of the GPU to train on.
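As the evaluation step below suggests, training output is collected under <WORK_DIR>/face_detection/<EXPERIMENT_NUM>. A hypothetical invocation with the placeholders filled in (the paths and GPU id are examples only, not defaults):

```bash
# Example values only: dataset mounted at /data, results collected
# in /work, training on the first GPU.
python3 train.py --model face_detection \
    --weights face-detection-retail-0044.caffemodel \
    --data_dir /data \
    --work_dir /work \
    --gpu 0
```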
To evaluate the quality of the trained Face Detection model on your test data, you can use the provided scripts:
```bash
python3 evaluate.py --type fd \
    --dir <WORK_DIR>/face_detection/<EXPERIMENT_NUM> \
    --data_dir <DATA_DIR> \
    --annotation wider_val.xml \
    --iter <ITERATION_NUM>
```
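For example, a hypothetical run that scores the snapshot saved at training iteration 100000 of experiment 001 (both values are placeholders, not defaults):

```bash
# Hypothetical placeholders: experiment directory 001, snapshot
# taken at training iteration 100000.
python3 evaluate.py --type fd \
    --dir <WORK_DIR>/face_detection/001 \
    --data_dir <DATA_DIR> \
    --annotation wider_val.xml \
    --iter 100000
```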
To convert the trained model for inference, run the conversion script:
```bash
python3 mo_convert.py --name face_detection \
    --dir <WORK_DIR>/face_detection/<EXPERIMENT_NUM> \
    --iter <ITERATION_NUM> \
    --data_type FP32
```
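If mo_convert.py wraps the OpenVINO Model Optimizer, as its name and the --data_type flag suggest, the conversion produces an IR pair: an .xml topology file and a .bin weights file. Assuming they are written into the experiment directory (an assumption; adjust the path if your setup places them elsewhere), you can check with:

```bash
# Assumption: the IR pair is written into the experiment directory.
ls <WORK_DIR>/face_detection/<EXPERIMENT_NUM>/*.xml <WORK_DIR>/face_detection/<EXPERIMENT_NUM>/*.bin
```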
You can use this demo to see how the resulting model performs.