This project contains examples of transfer learning using convolutional neural networks pretrained on the ImageNet dataset and repurposed to identify objects from the Caltech256 dataset. Several open-source deep learning frameworks are used, each leveraging GPU acceleration. The example here is exactly the same as the exercise described in this blog post from the Cloudera Engineering blog.
The same data is used across all frameworks. Prepare this data initially by executing the code in dataprep.py.
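As an illustration of the kind of preparation a script like dataprep.py typically performs, here is a minimal sketch that splits Caltech256 class folders into train/valid subsets. The function name, paths, and split ratio are assumptions for illustration, not the script's actual values:

```python
# Hypothetical sketch of dataprep-style preparation: splitting each
# Caltech256 class folder into train/ and valid/ subsets.
# Paths, names, and the split fraction are illustrative assumptions.
import os
import random
import shutil

def split_train_valid(src_dir, dst_dir, valid_frac=0.2, seed=0):
    """Copy each class folder's images into dst_dir/train and dst_dir/valid."""
    rng = random.Random(seed)
    counts = {"train": 0, "valid": 0}
    for cls in sorted(os.listdir(src_dir)):
        cls_path = os.path.join(src_dir, cls)
        if not os.path.isdir(cls_path):
            continue
        images = sorted(os.listdir(cls_path))
        rng.shuffle(images)
        n_valid = int(len(images) * valid_frac)
        for i, name in enumerate(images):
            split = "valid" if i < n_valid else "train"
            out = os.path.join(dst_dir, split, cls)
            os.makedirs(out, exist_ok=True)
            shutil.copy(os.path.join(cls_path, name), os.path.join(out, name))
            counts[split] += 1
    return counts
```

See dataprep.py itself for the actual steps (download, extraction, and layout) used by this project.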
For mxnet-gluon: also execute the code in mxnet-gluon/dataprep.py to convert the images into MXNet's binary RecordIO format.
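For reference, MXNet ships an im2rec.py tool that performs this kind of RecordIO conversion. The invocation below is a hypothetical example, not the command this project's script runs; the paths are placeholders and the exact flags may vary across MXNet versions (check `python im2rec.py --help`):

```shell
# Hypothetical im2rec.py usage (paths are placeholders).
# First generate a .lst listing of images and labels, then pack the
# images into a RecordIO (.rec) file, resizing the short edge to 256px.
python im2rec.py caltech256 data/256_ObjectCategories --list --recursive
python im2rec.py caltech256 data/256_ObjectCategories --resize 256 --quality 95 --num-thread 4
```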
For any particular framework, run the code in its featurize.py file, but be sure to change the number of GPUs used to match your setup.
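To make the "number of GPUs" knob concrete, here is a framework-agnostic sketch of how a featurization script might shard a batch of work across a configurable number of devices. `NUM_GPUS` and the device-naming scheme are illustrative assumptions, not this project's actual code:

```python
# Illustrative only: sharding a batch of items across NUM_GPUS devices,
# the kind of setting featurize.py expects you to adjust.
# The "gpu(i)" device labels here are a hypothetical naming scheme.
def shard_batch(items, num_gpus):
    """Split items into num_gpus nearly equal slices, one per device."""
    base, extra = divmod(len(items), num_gpus)
    shards, start = [], 0
    for gpu in range(num_gpus):
        size = base + (1 if gpu < extra else 0)  # spread the remainder
        shards.append((f"gpu({gpu})", items[start:start + size]))
        start += size
    return shards

NUM_GPUS = 2  # change to match your setup
shards = shard_batch(list(range(10)), NUM_GPUS)
```

With `NUM_GPUS = 2`, ten items split into two slices of five, one per device.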
Execute the code in train.py to load the featurized images and train a classifier on the features created in the previous step.
By default, train.py simply trains a softmax layer on top of the precomputed VGG16 features, which outputs 257 probabilities (one per Caltech256 category, including the clutter class). Changing the architecture can help eke out additional performance: adding batch normalization, training deeper layers, adding dropout, etc.
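Conceptually, the default training step amounts to multinomial logistic regression over the precomputed features. The NumPy sketch below illustrates that idea; the feature dimension, learning rate, and synthetic data are assumptions for demonstration, not values from train.py:

```python
# Conceptual NumPy sketch of train.py's default: a single softmax layer
# trained with cross-entropy over precomputed features, producing 257
# class probabilities. Hyperparameters and data here are illustrative.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_softmax(X, y, num_classes=257, lr=0.1, epochs=100):
    n, d = X.shape
    W = np.zeros((d, num_classes))
    b = np.zeros(num_classes)
    Y = np.eye(num_classes)[y]      # one-hot labels
    for _ in range(epochs):
        P = softmax(X @ W + b)      # forward pass: class probabilities
        G = (P - Y) / n             # gradient of mean cross-entropy loss
        W -= lr * (X.T @ G)         # gradient descent on weights
        b -= lr * G.sum(axis=0)     # and on the bias
    return W, b

# Tiny synthetic check: features separable by one coordinate.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))
y = (X[:, 0] > 0).astype(int)       # only 2 of the 257 classes appear
W, b = train_softmax(X, y)
acc = (softmax(X @ W + b).argmax(axis=1) == y).mean()
```

Adding batch normalization or dropout, or fine-tuning deeper VGG16 layers, replaces this single linear-plus-softmax map with a deeper head, which is where the performance gains mentioned above come from.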
Some of the code was adapted from the following examples: