Update Yolov8 Himax LP #1405

Open: wants to merge 6 commits into `main`
32 changes: 18 additions & 14 deletions content/learning-paths/microcontrollers/yolo-on-himax/_index.md
@@ -3,33 +3,37 @@ title: Run a Computer Vision Model on a Himax Microcontroller

minutes_to_complete: 90

who_is_this_for: This is an introduction topic for beginners on how to run a computervision application on an embedded device from Himax. This example uses an off-the-shelf Himax WiseEye2 module which is based on the Arm Cortex-M55 and Ethos-U55.
who_is_this_for: This is an introduction topic for beginners on how to run a computer vision application on an embedded device from Himax. This example uses an off-the-shelf Himax WiseEye2 module which is based on the Arm Cortex-M55 and Ethos-U55.

learning_objectives:
- Run a you-only-look-once (YOLO) object detection model on the edge device
- Build the Himax Software Development Kit (SDK) and generate the firmware image file
- Update the firmware on the edge device (Himax WiseEye2)

learning_objectives:
- Run a you-only-look-once (YOLO) computer vision model using off-the-shelf hardware based on the Arm Cortex-M55 and Ethos-U55.
- Learn how to build the Himax SDK and generate the firmware image file.
- Learn how to update the firmware on the edge device (Himax WiseEye2).

prerequisites:
- Seeed Grove Vision AI V2 Module
- OV5647-62 Camera module and included FPC cable
- A [Seeed Grove Vision AI Module V2](https://www.seeedstudio.com/Grove-Vision-AI-Module-V2-p-5851.html) development board
- An [OV5647-62 Camera Module](https://www.seeedstudio.com/OV5647-69-1-FOV-Camera-module-for-Raspberry-Pi-3B-4B-p-5484.html) and the included FPC cable
- A USB-C cable
- A Linux/Windows-based PC on an x86 archiecture.
- An x86-based Linux machine or a machine running Apple Silicon

author_primary: Chaodong Gong, Alex Su, Kieran Hejmadi

### Tags
skilllevels: Beginner
skilllevels: Introductory
subjects: ML
armips:
- Cortex M55
- Ethos U55
- Cortex-M55
- Ethos-U55
tools_software_languages:
- Himax SDK
- Bash
- Python
operatingsystems:
- Linux
- Windows
- macOS

#draft: true
#cascade:
# draft: true


### FIXED, DO NOT MODIFY
24 changes: 21 additions & 3 deletions content/learning-paths/microcontrollers/yolo-on-himax/_review.md
@@ -2,13 +2,31 @@
review:
- questions:
question: >
The Grove Vision AI V2 Module can run Yolov8 model in real time?
The Grove Vision AI V2 Module can run a YOLOv8 model in real time.
answers:
- True
- False
correct_answer: 1
correct_answer: 1
explanation: >
The Grove Vision AI V2 Module can run object detection in real time using the Cortex-M55 and Ethos-U55.
The Grove Vision AI V2 Module can run object detection in real time thanks to its ML-accelerated capabilities.
question: >
Which of these tasks is the YOLO model unable to perform?
answers:
- Pose detection
- Object detection
- Speech-to-text transcription
correct_answer: 3
explanation: >
The YOLO model is a computer vision model, meaning it takes images as input.
question: >
What Arm IP on the Seeed Grove Vision AI Module V2 enables you to run ML workloads efficiently?
answers:
- Ethos-U55
- Cortex-A72
- Cortex-X4
correct_answer: 1
explanation: >
When paired with the low-power Cortex-M55 processor, the Ethos-U55 provides an uplift in ML performance.


# ================================================================================
@@ -0,0 +1,65 @@
---
title: Build the firmware
weight: 3

### FIXED, DO NOT MODIFY
layout: learningpathall
---

This section walks you through the process of generating the firmware image file.

## Clone the Himax project

Himax has set up a repository containing a few examples for the Seeed Grove Vision AI V2 board. It contains third-party software and scripts to build and flash the image with the object detection application. Cloning the Himax examples repository recursively makes Git fetch the submodules that have been configured for the project as well.

```bash
git clone --recursive https://github.com/HimaxWiseEyePlus/Seeed_Grove_Vision_AI_Module_V2.git
cd Seeed_Grove_Vision_AI_Module_V2
```
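
As an optional check, you can confirm that the submodules were fetched correctly before building:

```bash
# Lists each submodule and the commit it is checked out at
git submodule status
```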

## Compile the firmware

For the object detection application to activate, you need to edit the project's `makefile`, located in the `EPII_CM55M_APP_S` directory. Navigate to that directory:

```bash
cd EPII_CM55M_APP_S
```

Use the `make` build tool to compile the source code. This should take up to 10 minutes depending on the number of CPU cores available on your host machine. The result is an `.elf` file written to the directory below.

```bash
make clean
make
```
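
As an optional sanity check, you can confirm that the `.elf` file was produced; the path below is the same one used in the copy step in the next section:

```bash
# Verify the build output exists
ls -l obj_epii_evb_icv30_bdv10/gnu_epii_evb_WLCSP65/EPII_CM55M_gnu_epii_evb_WLCSP65_s.elf
```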

## Generate the firmware image

Copy the `.elf` file to the `input_case1_secboot` directory.

```bash
cd ../we2_image_gen_local/
cp ../EPII_CM55M_APP_S/obj_epii_evb_icv30_bdv10/gnu_epii_evb_WLCSP65/EPII_CM55M_gnu_epii_evb_WLCSP65_s.elf input_case1_secboot/
```
The examples repository contains scripts to generate the image. Run the script corresponding to the OS of your host machine.

### Linux

```bash
./we2_local_image_gen project_case1_blp_wlcsp.json
```

### macOS
```console
./we2_local_image_gen_macOS_arm64 project_case1_blp_wlcsp.json
```

Your terminal output should end with the following.

```output
Output image: output_case1_sec_wlcsp/output.img
Output image: output_case1_sec_wlcsp/output.img

IMAGE GEN DONE
```

With this step, you are ready to flash the image onto the Himax development board.
122 changes: 122 additions & 0 deletions content/learning-paths/microcontrollers/yolo-on-himax/dev-env.md
@@ -0,0 +1,122 @@
---
title: Set up environment
weight: 2

### FIXED, DO NOT MODIFY
layout: learningpathall
---

# Set up the development environment

This learning path has been validated on Ubuntu 22.04 LTS and macOS.

{{% notice %}}
If you are running Windows on your host machine, you can use Ubuntu through Windows Subsystem for Linux 2 (WSL2). Check out [this learning path](https://learn.arm.com/learning-paths/laptops-and-desktops/wsl2/setup/) to get started.
{{% /notice %}}

## Install Python, pip and git

You will use Python to build the firmware image and pip to install some dependencies. Verify Python is installed by running:
```bash
python3 --version
```

You should see output similar to the following.
```output
Python 3.12.7
```

Install `pip` and `venv` with the following commands.

```bash
sudo apt update
sudo apt install python3-pip python3-venv -y
```
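
On macOS, `pip` and the `venv` module are typically included with a Python 3 installation. If Python is missing, one option is to install it with Homebrew:

```console
brew install python3
```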

Check the output to verify that `pip` is installed correctly.

```bash
pip3 --version
```

```output
pip 24.2 from /<path-to>/pip (python 3.12)
```

It is considered good practice to manage `pip` packages through a virtual environment. Create one with the steps below.

```bash
python3 -m venv $HOME/yolo-venv
source $HOME/yolo-venv/bin/activate
```

Your terminal prompt now displays `(yolo-venv)`, indicating that the virtual environment is active.
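
To double-check, you can confirm which Python interpreter is being used; the path should point inside `yolo-venv`:

```bash
which python3
```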

You also need the Git version control system. Run the command below to verify that Git is installed on your system.

```bash
git --version
```

You should see output similar to that below.

```output
git version 2.39.3
```

## Install make

Install the make build tool, which is used to build the firmware in the next section.

### Linux

```bash
sudo apt update
sudo apt install make -y
```

### macOS

```console
brew install make
```

If `make` is installed correctly, running `make --version` shows output similar to the following.

```output
$ make --version
GNU Make 4.3
Built for x86_64-pc-linux-gnu
Copyright (C) 1988-2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
```
{{% notice Note %}}
To run this learning path on macOS, verify that your installation is GNU Make, not the BSD version that ships with macOS. Note that Homebrew may install GNU Make as `gmake`.
{{% /notice %}}

## Install Arm GNU toolchain

### Linux

The toolchain is used to cross-compile from the host architecture (x86) to the Arm Cortex-M architecture of the embedded device.

```bash
cd $HOME
wget https://developer.arm.com/-/media/Files/downloads/gnu/13.2.rel1/binrel/arm-gnu-toolchain-13.2.rel1-x86_64-arm-none-eabi.tar.xz
tar -xvf arm-gnu-toolchain-13.2.rel1-x86_64-arm-none-eabi.tar.xz
export PATH="$HOME/arm-gnu-toolchain-13.2.Rel1-x86_64-arm-none-eabi/bin/:$PATH"
```
### macOS
```console
cd $HOME
wget https://developer.arm.com/-/media/Files/downloads/gnu/13.3.rel1/binrel/arm-gnu-toolchain-13.3.rel1-darwin-arm64-arm-none-eabi.tar.xz
tar -xvf arm-gnu-toolchain-13.3.rel1-darwin-arm64-arm-none-eabi.tar.xz
export PATH="$HOME/arm-gnu-toolchain-13.3.rel1-darwin-arm64-arm-none-eabi/bin/:$PATH"
```

{{% notice %}}
You can add the `export PATH` command above to your `.bashrc` file. This way, the Arm GNU toolchain is configured in new terminal sessions as well.
{{% /notice %}}
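
To confirm the toolchain is available on your `PATH`, you can check the compiler version; the exact version string depends on the release you installed:

```bash
arm-none-eabi-gcc --version
```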


Now that your development environment is set up, move on to the next section where you will generate the firmware image.
@@ -0,0 +1,95 @@
---
title: Flash firmware onto the microcontroller
weight: 4

### FIXED, DO NOT MODIFY
layout: learningpathall
---

Now that you have generated an image file on the local host machine, you are ready to flash the microcontroller with this firmware.

## Install xmodem

`Xmodem` is a basic file transfer protocol, and the tooling for it is installed from the Himax examples repository. Run the following commands to install the dependencies. If you cloned the repository to a different location, replace `$HOME` with that path.
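
If you created the `yolo-venv` virtual environment in the setup section, make sure it is active in this terminal before installing; you can reactivate it if needed:

```bash
source $HOME/yolo-venv/bin/activate
```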

```bash
cd $HOME/Seeed_Grove_Vision_AI_Module_V2
pip install -r xmodem/requirements.txt
```

## Connect the module

Insert the flexible printed circuit (FPC) cable into the Grove Vision AI V2 module. Lift the dark grey latch on the connector, as shown in the image below.

![unlatched](./unlatched.jpg)

Then, slide the FPC connector in with the metal pins facing down and close the dark grey latch to fasten the connector.

![latched](./latched.jpg)

Then connect the Grove Vision AI V2 Module to your computer via the USB-C cable.

{{% notice Note %}}
The development board may have two USB-C connectors. If you are running into issues connecting the board in the next step, make sure you are using the right one.
{{% /notice %}}

## Find the COM port

You need to provide the communication port (COM port) that the board is connected to in order to flash the image. The commands below list the serial ports available on your machine; once the board is connected over USB, it shows up in this list. The port identifier starts with **tty**, which can help you work out which one it is. If you are unsure, run the command before and after plugging in the board and compare the output.

### Linux

```bash
sudo grep -i 'tty' /var/log/dmesg
```

### macOS

```console
ls /dev/tty.*
```

{{% notice Note %}}
If the port seems unavailable, try temporarily changing its permissions using the `chmod` command. Be sure to reset the permissions afterwards, as leaving the port open to all users poses a security risk.

```bash
chmod 0777 <COM port>
```
{{% /notice %}}

The full path to the port is needed in the next step, so be sure to note it down.
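
The exact device name varies from system to system, but the full path typically looks something like the following illustrative examples:

```output
/dev/ttyACM0            (Linux)
/dev/tty.usbmodem12345  (macOS)
```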

## Flash the firmware onto the module

Run the Python script below from the root of the `Seeed_Grove_Vision_AI_Module_V2` repository to flash the firmware.

```bash
python xmodem/xmodem_send.py --port=<COM port> --baudrate=921600 --protocol=xmodem --file=we2_image_gen_local/output_case1_sec_wlcsp/output.img
```

{{% notice Note %}}
When you run other example models demonstrated in the later section [Object detection and additional models](/learning-paths/microcontrollers/yolo-on-himax/how-to-5/), you need to adapt this command to use the corresponding image file.
{{% /notice %}}

After the firmware image has been transferred, the message `Do you want to end file transmission and reboot system? (y)` is displayed. Press the reset button shown in the image below.

![reset button](./reset_button.jpg)

## Run the model

After the reset button is pressed, the board automatically starts running the object detection model. Observe the output in the terminal to verify that the firmware image is working correctly. If a person is in front of the camera, you should see the `person_score` value go above `100`.

```output
b'SENSORDPLIB_STATUS_XDMA_FRAME_READY 240'
b'write frame result 0, data size=15284,addr=0x340e04e0'
b'invoke pass'
b'person_score:113'
b'EVT event = 10'
b'SENSORDPLIB_STATUS_XDMA_FRAME_READY 241'
b'write frame result 0, data size=15296,addr=0x340e04e0'
b'invoke pass'
b'person_score:112'
b'EVT event = 10'
```

This means the image works correctly on the device, and the end-to-end flow is complete.