An application for banknote classification.
It is currently trained to classify banknotes from the following countries and regions:
- Russia.
- USA.
- EU.
- China.
- Kazakhstan.
To install the app, do the following:
- If you want, create a new venv.
- Install the following Python libraries:
- tensorflow
- numpy
- jupyter notebook
- imgaug
- matplotlib
- Pillow (PIL)
- scipy
- ipywidgets
- Clone this repository.
There are 4 pre-trained models; note that the second one doesn't work at all.
An example of usage is shown in `load_trained.ipynb`.
- Load models 1-4 from `training/*` with `tf.keras.models.load_model`.
- Read an image as a numpy array.
- Convert the image to grayscale, set its type to `np.float32`, and make sure every pixel is in the range 0-1 (just divide the pixel values by 255).
- Predict the values with `model.predict(pic)`.
- With `values[np.argmax(predicted_val)]`, get the predicted class of the banknote (a complete sketch of these steps is shown after this list).
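A minimal sketch of the steps above. The model path `training/model_1`, the input size of 128x128, and the class order in `values` are assumptions here; adjust them to the actual repository contents:

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# Assumed class order -- check the training notebook for the real one.
values = ["Russia", "USA", "EU", "China", "Kazakhstan"]

# Load one of the pre-trained models (the path is an assumption).
model = tf.keras.models.load_model("training/model_1")

# Read the image, convert it to grayscale, and scale pixels to 0-1.
img = Image.open("banknote.jpg").convert("L")
img = img.resize((128, 128))  # assumed model input size
pic = np.asarray(img, dtype=np.float32) / 255.0
pic = pic[np.newaxis, ..., np.newaxis]  # add batch and channel axes

predicted_val = model.predict(pic)
print(values[np.argmax(predicted_val)])
```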
As the aim was to make the neural network do as much as possible on its own, it should deal with noise, blur, and even non-cropped or rotated pictures by itself. To make this possible, the NN has to be trained on noisy, blurred, rotated, etc. pictures. Since it is a burden to collect a dataset with many variants of rotations, noise, etc., only a perfect primary dataset was collected. The training (or secondary) dataset is created by artificially adding all the imperfections.
The first thing to do was to collect the primary dataset. The primary dataset has to be perfect: no blur, no noise, not rotated, and cropped.
The generation of the secondary dataset is based on the primary dataset and was done with imgaug. The following augmentation operations were used (a sketch of the pipeline follows the list):
- Noise generation. Three types of noise: Gaussian, Poisson, and salt-and-pepper. Each of them was applied with 50 percent probability.
- Blur. Two types: Gaussian and motion. One of them was applied with randomized parameters.
- Affine transformations. Small rescales, translations, and rotations were applied to each picture.
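A sketch of what such a pipeline can look like with imgaug; the parameter ranges below are assumptions, the real values are in the generation notebook:

```python
import imgaug.augmenters as iaa

seq = iaa.Sequential([
    # Each noise type is applied independently with 50% probability.
    iaa.Sometimes(0.5, iaa.AdditiveGaussianNoise(scale=(0, 0.05 * 255))),
    iaa.Sometimes(0.5, iaa.AdditivePoissonNoise(lam=(0, 16))),
    iaa.Sometimes(0.5, iaa.SaltAndPepper(p=(0, 0.03))),
    # One of the two blur types, with randomized parameters.
    iaa.OneOf([
        iaa.GaussianBlur(sigma=(0.0, 2.0)),
        iaa.MotionBlur(k=(3, 7)),
    ]),
    # Small rescales, translations, and rotations for every picture.
    iaa.Affine(
        scale=(0.9, 1.1),
        translate_percent={"x": (-0.05, 0.05), "y": (-0.05, 0.05)},
        rotate=(-15, 15),
    ),
])

# primary_images is a hypothetical batch of primary-dataset images
# (a numpy array of shape (N, H, W) or (N, H, W, C)).
augmented_images = seq(images=primary_images)
```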
Examples of the augmentations can be seen in the following picture:
4 CNN models were created with different numbers of convolution kernels and layers, with or without batch normalization. For the exact architectures, see `neural_network_fit.ipynb`.
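For a rough idea, here is a minimal sketch of what one such model could look like; the input size, kernel counts, and layer depth are assumptions, not the exact architecture from the notebook:

```python
import tensorflow as tf

def build_model(num_classes=5, use_batchnorm=True):
    # Assumed 128x128 grayscale input, matching the preprocessing above.
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.InputLayer(input_shape=(128, 128, 1)))
    # Two conv blocks with different kernel counts; batch normalization
    # is optional, as in the four variants described above.
    for filters in (32, 64):
        model.add(tf.keras.layers.Conv2D(filters, 3, activation="relu"))
        if use_batchnorm:
            model.add(tf.keras.layers.BatchNormalization())
        model.add(tf.keras.layers.MaxPooling2D())
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(128, activation="relu"))
    model.add(tf.keras.layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```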
The results of the 1st and 3rd models are similarly good, the 4th is a bit less accurate, and the 2nd model doesn't work at all.