iOS application for Fruit Image Classifier
Binary/multiclass image classification of fruits. The trained model determines whether a fruit is fresh or rotten from a given image. The app can either capture an image with the device camera or load one from the images stored on the device. The model covers four fruits, each with a fresh and a rotten class:
- Apple (fresh + rotten)
- Mango (fresh + rotten)
- Orange (fresh + rotten)
- Banana (fresh + rotten)
Sample predictions (real label vs. predicted label):

| Real: Fresh, Predicted: Fresh | Real: Fresh, Predicted: Fresh | Real: Fresh, Predicted: Fresh |
|---|---|---|
| Real: Fresh, Predicted: Fresh | Real: Rotten, Predicted: Rotten | Real: Fresh, Predicted: Fresh |
| Real: Fresh, Predicted: Rotten | Real: Fresh, Predicted: Rotten | |
- Python 2.7+
- Anaconda
- Turi Create
- Detailed package info is in the `requirements.txt` file
- Swift 4
- iOS 11.2 SDK / iOS 11.1 SDK
- Xcode 9.2 / 9.1
- macOS 10.13.1+
- Any device running iOS 11.1+
Details regarding model evaluation (accuracy, confusion matrix) can be found here as an iPython notebook.
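For context, evaluating a Turi Create image classifier typically looks like the minimal sketch below; the model and SFrame file names are placeholders, not necessarily what the notebook uses:

```python
import turicreate as tc

# Placeholder file names: point these at the model and held-out SFrame you actually saved
model = tc.load_model('fruit_classifier.model')
test_data = tc.SFrame('fruits_test.sframe')

metrics = model.evaluate(test_data)
print(metrics['accuracy'])          # overall classification accuracy
print(metrics['confusion_matrix'])  # SFrame of target_label / predicted_label / count
```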
Clone the repo or download it as a zip.
- If you don't want to train on a new dataset and want to use what's already available, skip to the next part.
- Save your data (the images to train on) inside the `image_data` folder.
- Load the dataset into a dataframe: `python data_to_df.py`
- Create and train the model: `python predict.py`
- This creates the trained model and exports an `.mlmodel` file to use with the iOS application (a rough sketch of what these two scripts do follows this list).
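As a rough sketch of the Turi Create pipeline behind `data_to_df.py` and `predict.py` (the label column, folder layout, split ratio, and output file names below are illustrative assumptions, not necessarily what the scripts use):

```python
import os
import turicreate as tc

# --- data_to_df.py (roughly): build a labeled SFrame from image_data/ ---
data = tc.image_analysis.load_images('image_data', with_path=True)
# Assumes one sub-folder per class, e.g. image_data/freshapples/, image_data/rottenbanana/
data['label'] = data['path'].apply(lambda p: os.path.basename(os.path.dirname(p)))
data.save('fruits.sframe')

# --- predict.py (roughly): train, evaluate, and export for iOS ---
train_data, test_data = data.random_split(0.8)
model = tc.image_classifier.create(train_data, target='label')
print(model.evaluate(test_data)['accuracy'])

model.save('fruit_classifier.model')            # reusable Turi Create model
model.export_coreml('FruitClassifier.mlmodel')  # Core ML model for the Xcode project
```

The exported `.mlmodel` is the file the iOS app consumes in the next part.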
- Create a single-view iOS app in Xcode with a camera view.
- Add the Core ML model you got from training, or, if you skipped training, download the `.mlmodel` from the release section of the repo.
- Instantiate the model in your Swift code and run inference on captured images, or on whatever image input your app defines.
- If you wish to use the existing app, just import the `.mlmodel` into the project directory and build.
- It is best to test on a physical device rather than the simulator, since the simulator does not provide camera access.