
Object-Classification-and-Localization-with-TensorFlow

This is a multiclass image classification and localization project for a SINGLE object per image, using CNNs and TensorFlow on Python 3.

Dependencies

pip3 install -r requirements.txt

Training (GPU)

Cloning the repository to your local machine:

git clone https://github.com/MuhammedBuyukkinaci/Object-Classification-and-Localization-with-TensorFlow

Changing directory to this folder:

cd Object-Classification-and-Localization-with-TensorFlow

1 ) Augmenting data:

python3 create_training_data.py

2 ) Training the CNN:

python3 train.py

3 ) Testing on unseen data:

python3 test.py

Training on CPU

I trained on a GTX 1050; one epoch took approximately 10 seconds.

If you are using a CPU, which I do not recommend, change the lines below in train.py:

config = tf.ConfigProto(allow_soft_placement=True)
config.gpu_options.allow_growth = True
config.gpu_options.allocator_type = 'BFC'
with tf.Session(config=config) as sess:

to

with tf.Session() as sess:

Data

There are 3 categories: cucumber, eggplant, and mushroom. 188 images across these 3 categories were used in this project. The images used in this project are in the training_images folder. You can also download them from here.

Steps

1 ) Collecting images via Google Image Download. Only one object must be present in each image. After collecting the images, you must resize them to a consistent size in order to be able to label them.
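For example, a minimal resize pass with Pillow (the folder name and target size below are assumptions, not the repo's exact choices):

from PIL import Image
import os

IMAGE_DIR = 'collected_images'   # assumed folder of raw downloads
TARGET_SIZE = (256, 256)         # assumed size; any fixed size works, just keep it consistent

for name in os.listdir(IMAGE_DIR):
    path = os.path.join(IMAGE_DIR, name)
    img = Image.open(path).convert('RGB')
    img.resize(TARGET_SIZE, Image.BILINEAR).save(path)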

2 ) Labeling images via LabelImg.

3 ) Data Augmentation (create_training_data.py). Mirroring with respect to the x axis, mirroring with respect to the y axis, and adding noise were carried out. This increases the amount of data 8-fold.
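A minimal sketch of how one image could become 8 variants, combining the two mirrorings with a clean/noisy pair (the noise scale is an assumption):

import numpy as np

def augment(image):
    """Return 8 variants: {no flip, x-flip, y-flip, both} x {clean, noisy}."""
    variants = []
    for flip_x in (False, True):
        for flip_y in (False, True):
            img = image
            if flip_x:
                img = img[::-1, :]   # mirror with respect to the x axis
            if flip_y:
                img = img[:, ::-1]   # mirror with respect to the y axis
            variants.append(img)
            noise = np.random.normal(0, 10, img.shape)   # assumed noise scale
            variants.append(np.clip(img + noise, 0, 255).astype(np.uint8))
    return variants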

4 ) After data augmentation, the create_training_data.py script creates matching XML files for the augmented images (so that the augmented images do not have to be labeled by hand).
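This works because a bounding box can be flipped with the same transform as its image, so no hand relabeling is needed. A sketch with a hypothetical helper (coordinates follow the PASCAL VOC convention that LabelImg writes):

def flip_box(xmin, ymin, xmax, ymax, width, height, flip_x=False, flip_y=False):
    """Transform a PASCAL VOC box to match a mirrored image."""
    if flip_y:   # mirroring with respect to the y axis swaps left/right
        xmin, xmax = width - xmax, width - xmin
    if flip_x:   # mirroring with respect to the x axis swaps top/bottom
        ymin, ymax = height - ymax, height - ymin
    return xmin, ymin, xmax, ymax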

5 ) Making the data tabular (in create_training_data.py). The input is the image fed into the CNN. Output 1 is the one-hot encoded classification label. Output 2 is the bounding box location (a regression target).
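Conceptually, the script builds three parallel arrays, one row per (augmented) image; the names below are illustrative, not the script's actual variables:

import numpy as np

CLASSES = ['cucumber', 'eggplant', 'mushroom']

def one_hot(label):
    vec = np.zeros(len(CLASSES), dtype=np.float32)
    vec[CLASSES.index(label)] = 1.0
    return vec

X     = []   # input: image pixels
Y_cls = []   # output 1: one-hot class label
Y_box = []   # output 2: bounding box [xmin, ymin, xmax, ymax]
# for image, label, box in examples_parsed_from_xml:   # hypothetical iterator
#     X.append(image); Y_cls.append(one_hot(label)); Y_box.append(box)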

6 ) Determining hyperparameters in train.py.
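The actual values live in train.py; the constants below are illustrative assumptions only:

LEARNING_RATE = 1e-4   # assumed
BATCH_SIZE    = 16     # assumed
EPOCHS        = 50     # assumed
DROPOUT_RATE  = 0.5    # the 0.5 dropout ratio is stated in this README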

7 ) Splitting the labelled data into training and cross-validation (CV) sets in train.py.
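A minimal split sketch; whether train.py uses scikit-learn and which ratio it uses are assumptions:

from sklearn.model_selection import train_test_split

X_train, X_cv, Ycls_train, Ycls_cv, Ybox_train, Ybox_cv = train_test_split(
    X, Y_cls, Y_box, test_size=0.2, random_state=42)   # assumed 80/20 split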

8 ) Defining the architecture in train.py. I used AlexNet as the model architecture.
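A condensed TensorFlow 1.x sketch of an AlexNet-style backbone feeding two heads; filter counts follow the original AlexNet paper, and the exact layers in train.py may differ:

import tensorflow as tf   # TensorFlow 1.x, as in the tf.Session usage above

def alexnet_two_heads(images, n_classes=3, dropout_rate=0.5):
    # 5 convolution layers
    net = tf.layers.conv2d(images, 96, 11, strides=4, activation=tf.nn.relu)
    net = tf.layers.max_pooling2d(net, 3, 2)
    net = tf.layers.conv2d(net, 256, 5, padding='same', activation=tf.nn.relu)
    net = tf.layers.max_pooling2d(net, 3, 2)
    net = tf.layers.conv2d(net, 384, 3, padding='same', activation=tf.nn.relu)
    net = tf.layers.conv2d(net, 384, 3, padding='same', activation=tf.nn.relu)
    net = tf.layers.conv2d(net, 256, 3, padding='same', activation=tf.nn.relu)
    net = tf.layers.max_pooling2d(net, 3, 2)
    # fully connected layers with dropout, then one FC output per head
    net = tf.layers.flatten(net)
    net = tf.layers.dropout(tf.layers.dense(net, 4096, activation=tf.nn.relu), rate=dropout_rate)
    net = tf.layers.dropout(tf.layers.dense(net, 4096, activation=tf.nn.relu), rate=dropout_rate)
    cls_logits = tf.layers.dense(net, n_classes)   # classification head
    box_preds  = tf.layers.dense(net, 4)           # localization (regression) head
    return cls_logits, box_preds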

9 ) Creating 2 heads for calculating the loss in train.py. One head is for the classification loss; the other is for the regression (localization) loss.
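Continuing the sketch above, the two losses could be combined like this; the equal 1:1 weighting is an assumption:

cls_labels = tf.placeholder(tf.float32, [None, 3])   # one-hot class labels
box_labels = tf.placeholder(tf.float32, [None, 4])   # bounding box targets

# Head 1: softmax cross-entropy for classification.
cls_loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=cls_labels, logits=cls_logits))
# Head 2: mean squared error for bounding box regression.
box_loss = tf.reduce_mean(tf.square(box_preds - box_labels))

total_loss = cls_loss + box_loss   # assumed 1:1 weighting
train_op = tf.train.AdamOptimizer(1e-4).minimize(total_loss)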

10 ) Training the CNN on a GPU (GTX 1050; one epoch took approximately 10 seconds).

11 ) Testing on unseen data (the testing_images folder) collected from the Internet (in test.py).

Architecture

AlexNet is used as the architecture: 5 convolution layers and 3 fully connected layers with a 0.5 dropout ratio, totaling about 60 million parameters.

Predictions