This project built a mobile phone application that recommends the top 5 recipes that can be made with the detected ingredients by performing object detection on various vegetables. This Object Detection with YOLOv5 iOS sample app uses a PyTorch-scripted YOLOv5 model to detect vegetables across 9 classes, including Cabbage, Carrot, Garlic, Onion, Pepper, Tomato, Potato, and Zucchini.
This project is a modified version of the object detection app among the iOS demo apps built on the PyTorch Mobile platform; see here.
YOLO (You Only Look Once) is one of the fastest and most popular object detection models. YOLOv5 is an open-source implementation of the latest version of YOLO (for a quick test of loading YOLOv5 from PyTorch hub for inference, see here).
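As a quick illustration (separate from this repo's pipeline), loading the pretrained small model from PyTorch Hub and running inference on a sample image looks roughly like this; the model name and sample image URL come from the YOLOv5 documentation:

```python
import torch

# Quick sanity-check sketch: load the pretrained small YOLOv5 model from
# PyTorch Hub and run inference on a sample image (requires internet access).
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Sample image URL used in the YOLOv5 docs
results = model('https://ultralytics.com/images/zidane.jpg')
results.print()  # prints detected classes, confidences, and timing
```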
- PyTorch 1.8.0 or later (optional; needed only if you generate the model yourself)
- Python 3.8 (optional; needed only if you generate the model yourself)
- iOS LibTorch pod library 1.8.0 (as specified in the Podfile)
- Xcode 12 or later
To test-run the Object Detection iOS app, follow the steps below.
If you don't have a PyTorch environment set up to run the export script, you can download the pre-generated model file and class list instead:
wget https://github.com/kka-na/GGGS/releases/download/v1.3/yolov5s.torchscript.pt -P ObjectDetection/
wget https://github.com/kka-na/GGGS/releases/download/v1.3/classes.txt -P ObjectDetection/
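If you have a desktop PyTorch install handy, a quick sanity check like the sketch below (not required; paths assume the commands above were run from the repo root) confirms the files downloaded correctly:

```python
import torch

# Hedged sanity check: load the downloaded TorchScript model and class list.
# Use PyTorch 1.8.0 to match the version the model was created with.
model = torch.jit.load('ObjectDetection/yolov5s.torchscript.pt')
model.eval()

with open('ObjectDetection/classes.txt') as f:
    classes = [line.strip() for line in f if line.strip()]

print(f'{len(classes)} classes: {classes}')
```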
Be aware that the downloadable model file was created with PyTorch 1.8.0, matching the iOS LibTorch library 1.8.0 specified in the Podfile. If you use a different version of PyTorch to create your model by following the instructions below, make sure you specify the same iOS LibTorch version in the Podfile to avoid possible errors caused by a version mismatch.
You have to create an optimized TorchScript model; for a more detailed explanation, see here.
If you skip the optimization step, you can still create a TorchScript model for the mobile app to use, but inference on a non-optimized model can take about twice as long as on an optimized one: using the iOS app test images, the average inference times on the optimized and non-optimized models were 0.6 seconds and 1.18 seconds, respectively. See the SCRIPT AND OPTIMIZE FOR MOBILE RECIPE for more details.
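For reference, a rough desktop analogue of that comparison can be sketched as below; the on-device numbers above came from the iOS app itself, and the non-optimized file name here is hypothetical:

```python
import time
import torch

def avg_latency(path: str, runs: int = 20) -> float:
    """Average per-inference latency of a TorchScript model on a dummy input."""
    model = torch.jit.load(path)
    model.eval()
    x = torch.rand(1, 3, 640, 640)  # assumed input resolution
    with torch.no_grad():
        model(x)  # warm-up run
        start = time.time()
        for _ in range(runs):
            model(x)
    return (time.time() - start) / runs

# 'yolov5s.torchscript_raw.pt' is a hypothetical name for a non-optimized export
print('optimized:    ', avg_latency('ObjectDetection/yolov5s.torchscript.pt'))
print('non-optimized:', avg_latency('yolov5s.torchscript_raw.pt'))
```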
Finally, run the script below to generate the optimized TorchScript model and copy the generated model file yolov5s.torchscript.pt to the GGGS/ObjectDetection folder:
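The authoritative script for this step is export.py from the YOLOv5 repository; a minimal sketch of what that export does, assuming the Hub model traces cleanly at a 640x640 input, looks like this:

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Minimal sketch of the export step (the real export.py in the YOLOv5 repo
# handles model-specific details such as the Detect layer's export mode).
# autoshape=False returns the raw model that accepts plain tensors.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False)
model.eval()

# Trace with a dummy input at the assumed 640x640 app resolution
example = torch.rand(1, 3, 640, 640)
traced = torch.jit.trace(model, example)

# Optimize for mobile and save under the file name the app expects
optimized = optimize_for_mobile(traced)
optimized.save('ObjectDetection/yolov5s.torchscript.pt')
```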
NOTE that the small-sized version of the YOLOv5 model, which runs faster but with less accuracy, is generated by default when running export.py. You can also change the value of the weights parameter in export.py to generate the medium, large, and extra-large versions of the model, as sketched below.
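For context, the corresponding PyTorch Hub model names for the other sizes are shown in this sketch; swapping one into the export sketch above trades speed for accuracy:

```python
import torch

# Hub names for the four YOLOv5 sizes; larger models are more accurate but slower.
variants = ['yolov5s', 'yolov5m', 'yolov5l', 'yolov5x']  # small -> extra large
model = torch.hub.load('ultralytics/yolov5', variants[1], autoshape=False)  # e.g. medium
```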
Run the commands below:
pod install
open ObjectDetection.xcworkspace/
Select an iOS simulator or device in Xcode to run the app.
🔫 When you run the app, the main screen and a Start button are displayed.
🔫 When the user presses the Start button, real-time object detection is performed through the phone camera, and the inference time is displayed at the bottom of the screen.
🔫 If you touch the bounding box of a detected object on the screen, a list of recipes that can be made with that ingredient is displayed.
🔫 Click a recipe in the list to see more details about it. Finally, if you click the How to cook button, you will be taken to the recipe site.
See this video for the whole project description and a test run.