Repository to host GStreamer based Edge AI applications for TI devices
This repo adds vision-based defect detection support.
- About Defect Detection Demo
- Supported Devices
- EVM Setup
- Demo Setup and Running
- Result
- How It's Made
- Resources
This demo uses the AM62A to run a vision-based artificial intelligence model for defect detection in manufacturing applications. The model inspects produced units as they move on a conveyor belt and recognizes accepted and defective units. The demo is equipped with an object tracker that provides accurate coordinates of the units for the sorting and filtering process. A live video is displayed on the screen with green and red boxes overlaid on the accepted and defective units respectively. The screen also includes a graphical dashboard showing live statistics: total products, defect percentage, production rate, and a histogram of the defect types. This demo is built on top of edgeai-gst-apps. The current version runs on Python only.
This demo runs a custom-trained YOLOX-nano neural network on the AM62A and performs object detection on imagery to find defects.
See Resources for links to AM62A and other Edge AI training material.
| DEVICE | Supported |
| --- | --- |
| AM62A | ✔️ |
Follow the AM62A Quick Start guide for the AM62A Starter Kit
- Download the Edge AI SDK from ti.com.
- Ensure that the tisdk-edgeai-image-am62axx.wic.xz is being used.
- Install the SDK onto an SD card using a tool like Balena Etcher.
- Connect to the device (EVM) and login using a UART connection or a network connection through an SSH session.
- Clone this repo in your target under /opt:
root@am62axx-evm:/opt# git clone https://github.com/TexasInstruments/edgeai-gst-apps-defect-detection
root@am62axx-evm:/opt# cd edgeai-gst-apps-defect-detection
- Run the setup script below within this repository on the EVM. This requires an internet connection on the EVM.
  - An ethernet connection is recommended.
  - Proxy settings for HTTPS_PROXY may be required if the EVM is behind a firewall.
root@am62axx-evm:/opt/edgeai-gst-apps-defect-detection# ./setup-defect-detection.sh
This script will download the following:
- A pre-trained defect detection model based on yolox-nano-lite, which is installed under /opt/model_zoo in the filesystem.
- A test video to run the demo without the need for a camera.
- Run commands as follows from the base directory of this repo on the EVM.
root@am62axx-evm:/opt/edgeai-gst-apps-defect-detection# cd apps_python
- To run the demo using the pre-recorded test video as input:
root@am62axx-evm:/opt/edgeai-gst-apps-defect-detection/apps_python# ./app_edgeai.py ../configs/defect_detection_test_video.yaml
- To run the demo using a CSI camera as input:
root@am62axx-evm:/opt/edgeai-gst-apps-defect-detection/apps_python# ./app_edgeai.py ../configs/defect_detection_camera.yaml
The application shows two main sections on the screen: a live feed of the input video and a graphical dashboard. The live video is overlaid with boxes on the detected objects: green boxes mark accepted (good) units, while defective units are overlaid with various shades of red to distinguish their defect types. The dashboard gives a graphical overview of overall production performance, including the total units produced since the start of operation, the percentage of defective units, and the production rate in units per hour. The dashboard also shows a histogram detailing the types of detected defects.
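As an illustration of how such an overlay can be produced, the sketch below maps each class to a box color and draws it with OpenCV. The class names, BGR values, and the detection tuple format are assumptions made for illustration; the actual drawing logic lives in the post-processing class in post_process.py.

```python
# Hypothetical sketch: class-to-color mapping and box drawing with OpenCV.
# Class names and BGR values are illustrative, not the exact ones used in
# post_process.py.
import cv2

CLASS_COLORS = {
    "good":       (0, 200, 0),    # green for accepted units
    "half_ring":  (0, 0, 255),    # shades of red for the defect types
    "no_plastic": (0, 60, 200),
    "no_ring":    (60, 0, 160),
}

def draw_detections(frame, detections):
    """Overlay one labeled box per detection.

    `detections` is assumed to be a list of (class_name, score, (x1, y1, x2, y2)).
    """
    for class_name, score, (x1, y1, x2, y2) in detections:
        color = CLASS_COLORS.get(class_name, (255, 255, 255))
        cv2.rectangle(frame, (x1, y1), (x2, y2), color, 2)
        cv2.putText(frame, f"{class_name} {score:.2f}", (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 1)
    return frame
```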
The demo is built by custom training a YOLOX-nano model. Four classes are used to train the model: Good (accepted) and three types of defects: Half Ring, No Plastic, and No Ring. The figure shows example pictures from the four classes; the pictures are cropped for clarity.
100 pictures were taken for each class (400 pictures in total) in one orientation while varying the lighting conditions between pictures. The camera was positioned at a height close to that expected in the actual demo setup. The pictures were captured at a resolution of 720x720. The following figure shows samples of the pictures captured for the Good class.
Data augmentation is used to expand the collected dataset. Two geometrical augmentation methods are applied: left-right flipping and rotation. First, a flipped copy is created for each picture, which brings the total number of pictures to 400x2=800. Then five rotated copies of each picture are created, which brings the total up to 800+800x5=4800 pictures. The rotation angle is randomly selected for each copy. The following figure shows the augmentation process with an example; the pictures in the figure are cropped to show the changes.
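A minimal sketch of this augmentation is shown below using Pillow. The directory names and the 0-360 degree rotation range are assumptions for illustration, not the exact pipeline used to build the dataset.

```python
# Illustrative flip + random-rotation augmentation (assumed paths and ranges).
import random
from pathlib import Path
from PIL import Image, ImageOps

SRC_DIR = Path("dataset/raw")        # 400 original 720x720 pictures
DST_DIR = Path("dataset/augmented")
DST_DIR.mkdir(parents=True, exist_ok=True)

for img_path in SRC_DIR.glob("*.png"):
    img = Image.open(img_path)
    flipped = ImageOps.mirror(img)   # left-right flip

    # Keep the original and its flipped copy (400 -> 800 pictures)
    img.save(DST_DIR / img_path.name)
    flipped.save(DST_DIR / f"{img_path.stem}_flip.png")

    # Five randomly rotated copies of each of the two pictures (800 -> 4800)
    for base, tag in ((img, ""), (flipped, "_flip")):
        for i in range(5):
            angle = random.uniform(0, 360)
            base.rotate(angle).save(DST_DIR / f"{img_path.stem}{tag}_rot{i}.png")
```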
The model is trained using TI Edge AI Studio Model Composer, an online application which provides a full suite of tools for edge AI applications, including data capture, labeling, training, compilation, and deployment. Follow this Quick Start Guide for a detailed tutorial on using Model Composer.
The labeled dataset with the 4800 pictures is compressed as a tar file and uploaded to Model Composer. Model Composer divides the dataset into three parts for training, testing, and validation. The yolox-nano-lite model is selected in the training tab with the following parameters:
- Epochs: 10
- Learning Rate: 0.002
- Batch size: 8
- Weight decay: 0.0001
The model achieved 100% accuracy during training.
The model is then compiled using the default preset parameters in the model composer:
- Calibration Frames: 10
- Calibration Iterations: 10
- Detection Threshold: 0.6
- Detection Top K: 200
- Sensor Bits: 8
This step generates the required artifacts, which are downloaded to the AM62A EVM. These artifacts are used to offload the model to the deep learning accelerator at inference time.
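As a rough sketch of how the artifacts are consumed at inference time, assuming TI's onnxruntime build with the TIDL execution provider (as shipped in the Edge AI SDK), the offload looks roughly like the following. The model path, artifacts path, input shape, and dtype are placeholders rather than the exact values used by this repo.

```python
# Rough sketch: loading compiled artifacts so inference is offloaded to the
# deep learning accelerator. Paths and input details are placeholders.
import numpy as np
import onnxruntime as ort

MODEL = "/opt/model_zoo/<defect-detection-model>/model/model.onnx"   # placeholder
ARTIFACTS = "/opt/model_zoo/<defect-detection-model>/artifacts"      # placeholder

session = ort.InferenceSession(
    MODEL,
    providers=["TIDLExecutionProvider", "CPUExecutionProvider"],
    provider_options=[{"artifacts_folder": ARTIFACTS}, {}],
)

inp = session.get_inputs()[0]
# Dummy tensor just to exercise the session; real frames come from the GStreamer pipeline.
dummy = np.zeros([d if isinstance(d, int) else 1 for d in inp.shape], dtype=np.float32)
outputs = session.run(None, {inp.name: dummy})
print([o.shape for o in outputs])
```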
The object tracker provides accurate coordinates of the units detected in each frame. This information is used to count the total number of units and the number of units in each class. More importantly, the coordinates produced by the object tracker can be fed to the sorting and filtering mechanism in the production line. The object tracker code is contained in the object_tracker.py file.
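To illustrate the idea, here is a highly simplified, hypothetical centroid-based tracker: each detection is matched to the nearest previously seen unit so that a unit keeps a stable ID across frames and is counted only once. The actual classes in object_tracker.py are more involved.

```python
# Hypothetical, highly simplified centroid tracker (illustration only).
import math
from itertools import count

class SimpleTracker:
    def __init__(self, max_distance=50):
        self.max_distance = max_distance   # max centroid jump between frames
        self.next_id = count()
        self.objects = {}                  # id -> (cx, cy, class_name)

    def update(self, detections):
        """`detections` is assumed to be a list of (class_name, (x1, y1, x2, y2))."""
        updated = {}
        for class_name, (x1, y1, x2, y2) in detections:
            cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
            # Match to the nearest existing object, else assign a new ID.
            best_id, best_dist = None, self.max_distance
            for obj_id, (ox, oy, _) in self.objects.items():
                dist = math.hypot(cx - ox, cy - oy)
                if dist < best_dist:
                    best_id, best_dist = obj_id, dist
            obj_id = best_id if best_id is not None else next(self.next_id)
            updated[obj_id] = (cx, cy, class_name)
        self.objects = updated
        return updated
```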
The dashboard graphically shows an overview of the performance of the whole manufacturing system, including the total number of units, the percentage of defective units, and the rate of production in units per hour. It also shows a histogram of the defect types. Such information is useful for analyzing the manufacturing system and identifying the most common types of defects. The dashboard code is contained in its own class in the dashboard.py file. A new class is added to post_process.py to handle all post-processing work related to the defect detection demo, including calling the object tracker, calculating performance statistics, calling the dashboard generator, and drawing bounding boxes.
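The statistics themselves reduce to simple counters over the tracked units; the hypothetical sketch below shows one way to compute them, with class and field names chosen for illustration. The actual dashboard class in dashboard.py renders these values graphically.

```python
# Hypothetical sketch of the dashboard statistics: total units, defect
# percentage, production rate, and a per-defect histogram.
import time
from collections import Counter

class ProductionStats:
    def __init__(self):
        self.start_time = time.time()
        self.counts = Counter()            # class_name -> number of tracked units

    def add_unit(self, class_name):
        self.counts[class_name] += 1

    def summary(self):
        total = sum(self.counts.values())
        defects = total - self.counts.get("good", 0)
        hours = max((time.time() - self.start_time) / 3600.0, 1e-9)
        return {
            "total_units": total,
            "defect_percent": 100.0 * defects / total if total else 0.0,
            "units_per_hour": total / hours,
            "defect_histogram": {k: v for k, v in self.counts.items() if k != "good"},
        }
```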
- apps_python:
- Add a new post process class for defect detection in post_process.py.
- Add a new dashboard class in dashboard.py to generate a graphical representation of the system's performance.
- Add new detectObject and objectTracker classes in object_tracker.py to track units detected in the frame.
- apps_cpp: Not changed in this version
- configs: Create two new config files:
- /configs/defect_detection_test_video.yaml to run the demo using a pre-recorded video as input.
- /configs/defect_detection_camera.yaml to run the demo with a CSI or a USB camera feed as input.