diff --git a/content/learning-paths/microcontrollers/yolo-on-himax/_index.md b/content/learning-paths/microcontrollers/yolo-on-himax/_index.md
index 59d523d0b..1dcf71ef7 100644
--- a/content/learning-paths/microcontrollers/yolo-on-himax/_index.md
+++ b/content/learning-paths/microcontrollers/yolo-on-himax/_index.md
@@ -3,33 +3,37 @@ title: Run a Computer Vision Model on a Himax Microcontroller
 minutes_to_complete: 90
 
-who_is_this_for: This is an introduction topic for beginners on how to run a computervision application on an embedded device from Himax. This example uses an off-the-shelf Himax WiseEye2 module which is based on the Arm Cortex-M55 and Ethos-U55.
+who_is_this_for: This is an introduction topic for beginners on how to run a computer vision application on an embedded device from Himax. This example uses an off-the-shelf Himax WiseEye2 module which is based on the Arm Cortex-M55 and Ethos-U55.
+
+learning_objectives:
+    - Run a you-only-look-once (YOLO) object detection model on the edge device
+    - Build the Himax Software Development Kit (SDK) and generate the firmware image file
+    - Update the firmware on the edge device (Himax WiseEye2)
 
-learning_objectives:
-    - Run a you-only-look-once (YOLO) computer vision model using off-the-shelf hardware based on the Arm Cortex-M55 and Ethos-U55.
-    - Learn how to build the Himax SDK and generate firmware image file.
-    - Learn how to update firmware on edge device (Himax WiseEye2).
-
 prerequisites:
-    - Seeed Grove Vision AI V2 Module
-    - OV5647-62 Camera module and included FPC cable
+    - A [Seeed Grove Vision AI Module V2](https://www.seeedstudio.com/Grove-Vision-AI-Module-V2-p-5851.html) development board
+    - An [OV5647-62 Camera Module](https://www.seeedstudio.com/OV5647-69-1-FOV-Camera-module-for-Raspberry-Pi-3B-4B-p-5484.html) and included FPC cable
     - A USB-C cable
-    - A Linux/Windows-based PC on an x86 archiecture.
+    - An x86-based Linux machine or a machine running Apple Silicon
 
 author_primary: Chaodong Gong, Alex Su, Kieran Hejmadi
 
 ### Tags
-skilllevels: Beginner
+skilllevels: Introductory
 subjects: ML
 armips:
-    - Cortex M55
-    - Ethos U55
+    - Cortex-M55
+    - Ethos-U55
 tools_software_languages:
     - Himax SDK
-    - Bash
+    - Python
 operatingsystems:
     - Linux
-    - Windows
+    - macOS
+
+draft: true
+cascade:
+    draft: true
 
 ### FIXED, DO NOT MODIFY
diff --git a/content/learning-paths/microcontrollers/yolo-on-himax/_review.md b/content/learning-paths/microcontrollers/yolo-on-himax/_review.md
deleted file mode 100644
index 27a46683f..000000000
--- a/content/learning-paths/microcontrollers/yolo-on-himax/_review.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-review:
-    - questions:
-        question: >
-            The Grove Vision AI V2 Module can run Yolov8 model in real time?
-        answers:
-            - True
-            - False
-        correct_answer: 1
-        explanation: >
-            The Grove Vision AI V2 Module can run object detection in real time using the Cortex-M55 and Ethos-U55.
-
-
-# ================================================================================
-# FIXED, DO NOT MODIFY
-# ================================================================================
-title: "Review" # Always the same title
-weight: 20 # Set to always be larger than the content in this path
-layout: "learningpathall" # All files under learning paths have this same wrapper
----
diff --git a/content/learning-paths/microcontrollers/yolo-on-himax/build-firmware.md b/content/learning-paths/microcontrollers/yolo-on-himax/build-firmware.md
new file mode 100644
index 000000000..ef89acd6d
--- /dev/null
+++ b/content/learning-paths/microcontrollers/yolo-on-himax/build-firmware.md
@@ -0,0 +1,65 @@
+---
+title: Build the firmware
+weight: 3
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+This section will walk you through the process of generating the firmware image file.
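Before building, you can confirm that the Arm GNU toolchain set up in the previous section is visible on your `PATH`. This is a quick sanity check; `arm-none-eabi-gcc` is the cross-compiler shipped in that toolchain.

```shell
# Verify the cross-compiler is reachable before starting the build.
if command -v arm-none-eabi-gcc >/dev/null 2>&1; then
    arm-none-eabi-gcc --version | head -n 1
else
    echo "arm-none-eabi-gcc not found - re-run the export PATH command from the previous section"
fi
```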
+
 
## Clone the Himax project

Himax has set up a repository containing a few examples for the Seeed Grove Vision AI V2 board. It contains third-party software and scripts to build and flash the image with the object detection application. By recursively cloning the Himax examples repo, git will include the necessary sub-repositories that have been configured for the project.

```bash
git clone --recursive https://github.com/HimaxWiseEyePlus/Seeed_Grove_Vision_AI_Module_V2.git
cd Seeed_Grove_Vision_AI_Module_V2
```

## Compile the firmware

To activate the object detection application, you need to edit the `APP_TYPE` field in the project's `makefile`, located in the `EPII_CM55M_APP_S` directory.

```bash
cd EPII_CM55M_APP_S
```

Use the `make` build tool to compile the source code. This should take up to 10 minutes depending on the number of CPU cores available on your host machine. The result is an `.elf` file written to the `obj_epii_evb_icv30_bdv10/gnu_epii_evb_WLCSP65` directory.

```bash
make clean
make
```

## Generate the firmware image

The examples repository contains scripts to generate the image file. Copy the `.elf` file to the `input_case1_secboot` directory.

```bash
cd ../we2_image_gen_local/
cp ../EPII_CM55M_APP_S/obj_epii_evb_icv30_bdv10/gnu_epii_evb_WLCSP65/EPII_CM55M_gnu_epii_evb_WLCSP65_s.elf input_case1_secboot/
```

Run the script corresponding to the OS of your host machine. This will create a file named `output.img` in the `output_case1_sec_wlcsp` directory.

{{< tabpane code=true >}}
  {{< tab header="Linux" language="shell">}}
./we2_local_image_gen project_case1_blp_wlcsp.json
  {{< /tab >}}
  {{< tab header="MacOS" language="shell">}}
./we2_local_image_gen_macOS_arm64 project_case1_blp_wlcsp.json
  {{< /tab >}}
{{< /tabpane >}}

Your terminal output should end with the following.
+
```output
Output image: output_case1_sec_wlcsp/output.img
Output image: output_case1_sec_wlcsp/output.img

IMAGE GEN DONE
```

With this step, you are ready to flash the image onto the Himax development board.
\ No newline at end of file
diff --git a/content/learning-paths/microcontrollers/yolo-on-himax/dev-env.md b/content/learning-paths/microcontrollers/yolo-on-himax/dev-env.md
new file mode 100644
index 000000000..450297b75
--- /dev/null
+++ b/content/learning-paths/microcontrollers/yolo-on-himax/dev-env.md
@@ -0,0 +1,118 @@
+---
+title: Set up environment
+weight: 2
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+# Set up the development environment
+
+This learning path has been validated on Ubuntu 22.04 LTS and macOS.
+
+{{% notice %}}
+If you are running Windows on your host machine, you can use Ubuntu through Windows Subsystem for Linux 2 (WSL2). Check out [this learning path](https://learn.arm.com/learning-paths/laptops-and-desktops/wsl2/setup/) to get started.
+{{% /notice %}}
+
+## Install Python, pip and git
+
+You will use Python to build the firmware image and pip to install some dependencies. Verify Python is installed by running:
+
+```bash
+python3 --version
+```
+
+You should see an output like the following.
+
+```output
+Python 3.12.7
+```
+
+Install `pip` and `venv` with the following commands.

```bash
sudo apt update
sudo apt install python3-pip python3-venv -y
```

Check the output to verify `pip` is installed correctly.

```bash
pip3 --version
```

```output
pip 24.2 from //pip (python 3.12)
```

It is considered good practice to manage `pip` packages through a virtual environment. Create one with the steps below.

```bash
python3 -m venv $HOME/yolo-venv
source $HOME/yolo-venv/bin/activate
```

Your terminal displays `(yolo-venv)` in the prompt, indicating that the virtual environment is active.

You will need to have the git version control system installed.
Run the command below to verify that git is installed on your system.

```bash
git --version
```

You should see output similar to that below.

```output
git version 2.39.3
```

## Install make

Install the make build tool, which is used to build the firmware in the next section.

{{< tabpane code=true >}}
  {{< tab header="Linux" language="shell">}}
sudo apt update
sudo apt install make -y
  {{< /tab >}}
  {{< tab header="MacOS" language="shell">}}
brew install make
  {{< /tab >}}
{{< /tabpane >}}

Successful installation of make will show the following when the `make --version` command is run.

```output
$ make --version
GNU Make 4.3
Built for x86_64-pc-linux-gnu
Copyright (C) 1988-2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
```

{{% notice Note %}}
To run this learning path on macOS, you need to verify that your installation is for GNU Make, not the BSD version. Homebrew installs GNU Make as `gmake`.
{{% /notice %}}

## Install Arm GNU toolchain

The toolchain is used to cross-compile code on the host machine for the embedded device architecture.
+

{{< tabpane code=true >}}
  {{< tab header="Linux" language="shell">}}
cd $HOME
wget https://developer.arm.com/-/media/Files/downloads/gnu/13.2.rel1/binrel/arm-gnu-toolchain-13.2.rel1-x86_64-arm-none-eabi.tar.xz
tar -xvf arm-gnu-toolchain-13.2.rel1-x86_64-arm-none-eabi.tar.xz
export PATH="$HOME/arm-gnu-toolchain-13.2.Rel1-x86_64-arm-none-eabi/bin/:$PATH"
  {{< /tab >}}
  {{< tab header="MacOS" language="shell">}}
cd $HOME
wget https://developer.arm.com/-/media/Files/downloads/gnu/13.3.rel1/binrel/arm-gnu-toolchain-13.3.rel1-darwin-arm64-arm-none-eabi.tar.xz
tar -xvf arm-gnu-toolchain-13.3.rel1-darwin-arm64-arm-none-eabi.tar.xz
export PATH="$HOME/arm-gnu-toolchain-13.3.rel1-darwin-arm64-arm-none-eabi/bin/:$PATH"
  {{< /tab >}}
{{< /tabpane >}}

{{% notice %}}
You can add the `export` command to the `.bashrc` file. This way, the Arm GNU toolchain is configured for new terminal sessions as well.
{{% /notice %}}

Now that your development environment is set up, move on to the next section where you will generate the firmware image.
\ No newline at end of file
diff --git a/content/learning-paths/microcontrollers/yolo-on-himax/flash-and-run.md b/content/learning-paths/microcontrollers/yolo-on-himax/flash-and-run.md
new file mode 100644
index 000000000..78d03dcf5
--- /dev/null
+++ b/content/learning-paths/microcontrollers/yolo-on-himax/flash-and-run.md
@@ -0,0 +1,96 @@
+---
+title: Flash firmware onto the microcontroller
+weight: 4
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+Now that you have generated an image file on the local host machine, you are ready to flash the microcontroller with this firmware.
+
+## Install xmodem
+
+`Xmodem` is a basic file transfer protocol. Its dependencies are easily installed using the Himax examples repository. Run the following command to install them. If you cloned the repository to a different location, replace $HOME with the path.
+

```bash
cd $HOME/Seeed_Grove_Vision_AI_Module_V2
pip install -r xmodem/requirements.txt
```

## Connect the module

To prepare for the next steps, it's time to get the board set up. Insert the flexible printed circuit (FPC) cable into the Grove Vision AI V2 module. Lift the dark grey latch on the connector as per the image below.

![unlatched](./unlatched.jpg)

Then, slide the FPC connector in with the metal pins facing down and close the dark grey latch to fasten the connector.

![latched](./latched.jpg)

Now you can connect the Grove Vision AI V2 module to your computer via the USB-C cable.

{{% notice Note %}}
The development board may have two USB-C connectors. If you are running into issues connecting the board in the next step, make sure you are using the right one.
{{% /notice %}}

## Find the COM port

You'll need to provide the communication port (COM) which the board is connected to in order to flash the image. There are commands to list all COM ports available on your machine. Once your board is connected through USB, it'll show up in this list. The COM identifier will start with **tty**, which may help you determine which one it is. You can run the command before and after plugging in the board if you are unsure.

{{< tabpane code=true >}}
  {{< tab header="Linux" language="shell">}}
sudo grep -i 'tty' /var/log/dmesg
  {{< /tab >}}
  {{< tab header="MacOS" language="shell">}}
ls /dev/tty.*
  {{< /tab >}}
{{< /tabpane >}}

{{% notice Note %}}
If the port seems unavailable, try changing the permissions temporarily using the `chmod` command. Be sure to reset them afterwards, as this may pose a computer security vulnerability.

```bash
chmod 0777 [your COM port]
```
{{% /notice %}}

The full path to the port is needed in the next step, so be sure to note it down.

## Flash the firmware onto the module

Run the Python script below to flash the firmware.
+

```bash
python xmodem/xmodem_send.py --port=[your COM port] \
--baudrate=921600 --protocol=xmodem \
--file=we2_image_gen_local/output_case1_sec_wlcsp/output.img
```

{{% notice Note %}}
When you run other example models demonstrated in the later section [Run additional models in the web toolkit](/learning-paths/microcontrollers/yolo-on-himax/web-toolkit/), you need to adapt this command with the `--model` argument.
{{% /notice %}}

After the firmware image flashing is completed, the message `Do you want to end file transmission and reboot system? (y)` is displayed. Press the reset button indicated in the image below.

![reset button](./reset_button.jpg)

## Run the model

After the reset button is pressed, the board will start inference with the object detection automatically. Observe the output in the terminal to verify that the image is built correctly. If a person is in front of the camera, you should see the `person_score` value go over `100`.

```output
b'SENSORDPLIB_STATUS_XDMA_FRAME_READY 240'
b'write frame result 0, data size=15284,addr=0x340e04e0'
b'invoke pass'
b'person_score:113'
b'EVT event = 10'
b'SENSORDPLIB_STATUS_XDMA_FRAME_READY 241'
b'write frame result 0, data size=15296,addr=0x340e04e0'
b'invoke pass'
b'person_score:112'
b'EVT event = 10'
```

This means the image works correctly on the device, and the end-to-end flow is complete.
\ No newline at end of file
diff --git a/content/learning-paths/microcontrollers/yolo-on-himax/how-to-1.md b/content/learning-paths/microcontrollers/yolo-on-himax/how-to-1.md
deleted file mode 100644
index da4626865..000000000
--- a/content/learning-paths/microcontrollers/yolo-on-himax/how-to-1.md
+++ /dev/null
@@ -1,88 +0,0 @@
----
-title: Set Up Environment
-weight: 2
-
-### FIXED, DO NOT MODIFY
-layout: learningpathall
----
-
-## Set up the Development Environment
-
-### Step 1.1.
Install Ubuntu - -If you are running Windows on your host machine, we recommend using Ubuntu through Windows subsystem for Linux 2 (WSL2). Please see [this learning path](https://learn.arm.com/learning-paths/laptops-and-desktops/wsl2/setup/) for assistance - -This learning path has been validated on Ubuntu 22.04 LTS. However, we expect other linux distributions to work. To verify the Linux distribution you are using you can run the `cat /etc/*release*` command. - -```bash -cat /etc/*release* -``` -The top lines from the terminal output will show the distribution version. - -```output -DISTRIB_ID=Ubuntu -DISTRIB_RELEASE=22.04 -DISTRIB_CODENAME=jammy -DISTRIB_DESCRIPTION="Ubuntu 22.04.5 LTS" -... -``` - -### Step 1.2. (Optional) Install Microsoft Visual Studio Code - -This is only optional. You can use any text editor you are comfortable with to view or edit code. By typing “wsl” in VS Code terminal, you can switch to Linux environment. - -### Step 1.3. Install python 3 - -Go to website python.org to download and install. -Verify python is installed by -python3 --version -You should see an output like the following. -```output -Python 3.12.7 -``` -### Step 1.4. Install python-pip - -```bash -sudo apt update -sudo apt install python3-pip -y -pip3 --version -``` - -If `pip3` is correctly installed you should see an output similar to tht following. - -```output -pip 24.2 from /pip (python 3.12) -``` - -### Step 1.5. Install make - -You will need to install the make build tool in order to build the firmware in the following section. - -```bash -sudo apt update -sudo apt install make -y -``` - -Successful installation of make will show the following when the `make --version` command is run. - -```output -$ make --version -GNU Make 4.3 -Built for x86_64-pc-linux-gnu -Copyright (C) 1988-2020 Free Software Foundation, Inc. -License GPLv3+: GNU GPL version 3 or later -This is free software: you are free to change and redistribute it. 
-There is NO WARRANTY, to the extent permitted by law. -``` - -### Step 1.6. Install ARM GNU toolchain - -```bash -cd ~ -wget https://developer.arm.com/-/media/Files/downloads/gnu/13.2.rel1/binrel/arm-gnu-toolchain-13.2.rel1-x86_64-arm-none-eabi.tar.xz -tar -xvf arm-gnu-toolchain-13.2.rel1-x86_64-arm-none-eabi.tar.xz -export PATH="$HOME/arm-gnu-toolchain-13.2.Rel1-x86_64-arm-none-eabi/bin/:$PATH" -``` - -Please note: you may want to add the command to your `bashrc` file. This enables the Arm GNU toolchain to be easily accessed from any new terminal session. - diff --git a/content/learning-paths/microcontrollers/yolo-on-himax/how-to-2.md b/content/learning-paths/microcontrollers/yolo-on-himax/how-to-2.md deleted file mode 100644 index 388c390f0..000000000 --- a/content/learning-paths/microcontrollers/yolo-on-himax/how-to-2.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -title: Build The Firmware -weight: 3 - -### FIXED, DO NOT MODIFY -layout: learningpathall ---- - -## Build The Firmware - -Next, we need to build an image that contains the embedded software (firmware). You will need to have the git version control system installed. Run the command below to verify that git is installed on your system. - -```bash -git --version -``` - -You should see output similar to that below. - -```output -git version 2.39.3 -``` - -If not, please follow the steps to install git on your system. - -### Step 2.1. Clone the Himax project - -You will first need to recusively clone the Himax repository. This will also clone the necessary sub repos such as Arm CMSIS. - -```bash -git clone --recursive https://github.com/HimaxWiseEyePlus/Seeed_Grove_Vision_AI_Module_V2.git -cd Seeed_Grove_Vision_AI_Module_V2 -``` - -### Step 2.2. Compile the Firmware - -The make build tool is used to compile the source code. This should take up around 2-3 minutes depending on the number of CPU cores available. - -```bash -cd EPII_CM55M_APP_S -make clean -make -``` - - -### Step 2.3. 
Generate a Firmware Image - -```bash -cd ../we2_image_gen_local/ -cp ../EPII_CM55M_APP_S/obj_epii_evb_icv30_bdv10/gnu_epii_evb_WLCSP65/EPII_CM55M_gnu_epii_evb_WLCSP65_s.elf input_case1_secboot/ -./we2_local_image_gen project_case1_blp_wlcsp.json -``` - -Your terminal output should end with the following. - -```output -Output image: output_case1_sec_wlcsp/output.img -Output image: output_case1_sec_wlcsp/output.img - -IMAGE GEN DONE -``` diff --git a/content/learning-paths/microcontrollers/yolo-on-himax/how-to-3.md b/content/learning-paths/microcontrollers/yolo-on-himax/how-to-3.md deleted file mode 100644 index b8fde69fd..000000000 --- a/content/learning-paths/microcontrollers/yolo-on-himax/how-to-3.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: Flash Firmware onto the Microcontroller -weight: 3 - -### FIXED, DO NOT MODIFY -layout: learningpathall ---- - -## Flash the Firmware - -Now that we have generated a firmware file on our local machine, we need to flash the microcontroller with this firmware. - -### Step 3.1. Install xmodem. - -`Xmodem` is a basic file transfer protocol. Run the following command to install the dependencies for xmodem. - -```bash -cd $HOME/Seeed_Grove_Vision_AI_Module_V2 # If you cloned the repo to a different location replace $HOME with the path. -pip install -r xmodem/requirements.txt -``` - -### Step 3.2. Connect the module to PC by USB cable. - -You will need to insert the FPC cable cable into the Grove Vision AI V2 module. Lift the dark grey latch on the connector as per the image below. - -![unlatched](./unlatched.jpg) - -Then, slide the FPC connector in with the metal pins facing down and close the dark grey latch to fasten the connector. - -![latched](./latched.jpg) - -Then connect the Groove Vision AI V2 Module to your computer via the USB-C cable. - -### Step 3.4. Flash the firmware onto the moule. - -Run the python script below to flash the firmware. 
- -```python -python xmodem\xmodem_send.py --port=[your COM number] --baudrate=921600 --protocol=xmodem --file=we2_image_gen_local\output_case1_sec_wlcsp\output.img -``` - - Note: If running one of the other example models demonstrated in '(Optional) Try Different Models', the command might be slightly different. - -After the firmware image burning is completed, the message "Do you want to end file transmission and reboot system? (y)" is displayed. Press the reset button on the module as per the image below. - -![reset button](./reset_button.jpg) diff --git a/content/learning-paths/microcontrollers/yolo-on-himax/how-to-4.md b/content/learning-paths/microcontrollers/yolo-on-himax/how-to-4.md deleted file mode 100644 index 2aa563276..000000000 --- a/content/learning-paths/microcontrollers/yolo-on-himax/how-to-4.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -title: Run and View Model Results -weight: 3 - -### FIXED, DO NOT MODIFY -layout: learningpathall ---- - - -### Step 4.1. Connect module to PC with USB cable. - -Exit the terminal session and connect the module to the PC via your USB-C cable. - -### Step 4.2. Download the Himax AI web toolkit. - -The Himax AI web toolkit enables a browser-based graphical user interface (GUI) for the live camera feed. - -Download the Himax AI Web toolkit by clicking on this [link](https://github.com/HimaxWiseEyePlus/Seeed_Grove_Vision_AI_Module_V2/releases/download/v1.1/Himax_AI_web_toolkit.zip) - -Unzip the archived file and double click `index.html`. This will open the GUI within your default browser. - -### Step 4.3. Connect to the Grove Vision AI - -Select 'Grove Vision AI(V2)' in the top-right hand corner and press connect button. 
- -![Himax web UI](./himax_web_ui.jpg) diff --git a/content/learning-paths/microcontrollers/yolo-on-himax/how-to-5.md b/content/learning-paths/microcontrollers/yolo-on-himax/how-to-5.md deleted file mode 100644 index cb6ad788f..000000000 --- a/content/learning-paths/microcontrollers/yolo-on-himax/how-to-5.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -title: (Optional) Try Different Models -weight: 5 - -### FIXED, DO NOT MODIFY -layout: learningpathall ---- - - -### Modify the makefile - -Change the directory to the where the makefile is located. - -```bash -cd $HOME/Seeed_Grove_Vision_AI_Module_V2/EPII_CM55M_APP_S/ # replace $HOME with the location of the project -``` - -Using a text editor, for example visual studio code or nano, modify the `APP_TYPE` field in the makefile from the default value of `allon_sensor_tflm` to one of the values in the table below - - -|APP_TYPE =|Description| -|---|---| -|tflm_folov8_od|Object detection| -|tflm_folov8_pose|Pose detection| -|tflm_fd_fm|Face detection| - -### Regenerate the Firmware Image - -Go back to the 'Build The Firmware' section and start from Step 3.2. to regenerate the firmware image. - -The images below are examples images from the model. - -#### Objection Detection -![object_detection](./object_detection.jpg) - -#### Pose Estimation -![Pose estimation](./pose_estimation.jpg) - -#### Face Detection -![object_detection](./face_detection.jpg) - diff --git a/content/learning-paths/microcontrollers/yolo-on-himax/web-toolkit.md b/content/learning-paths/microcontrollers/yolo-on-himax/web-toolkit.md new file mode 100644 index 000000000..f3ec67106 --- /dev/null +++ b/content/learning-paths/microcontrollers/yolo-on-himax/web-toolkit.md @@ -0,0 +1,94 @@ +--- +title: Run additional models in the web toolkit +weight: 6 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +In this section, you will view a live camera feed with a computer vision application running. 
+

## Modify the Makefile

Change to the directory where the `makefile` is located. If you cloned the repository to a different location, replace $HOME with the path.

```bash
cd $HOME/Seeed_Grove_Vision_AI_Module_V2/EPII_CM55M_APP_S/
```

The table shows the different options available to use with the web toolkit. Modify the `APP_TYPE` field in the `makefile` to one of the values in the table. Then pass the corresponding model argument to the Python `xmodem` command using `--model`.

| APP_TYPE | Description | Model argument |
|---|---|---|
| tflm_yolov8_od | Object detection | model_zoo/tflm_yolov8_od/yolov8n_od_192_delete_transpose_0xB7B000.tflite 0xB7B000 0x00000 |
| tflm_fd_fm | Face detection | model_zoo/tflm_fd_fm/0_fd_0x200000.tflite 0x200000 0x00000 model_zoo/tflm_fd_fm/1_fm_0x280000.tflite 0x280000 0x00000 model_zoo/tflm_fd_fm/2_il_0x32A000.tflite 0x32A000 0x00000 |

{{% notice Note %}}
For `tflm_fd_fm`, you need to pass all three models as separate `--model` arguments.
{{% /notice %}}

## Regenerate the firmware image

Now you can run `make` to regenerate the `.elf` file.

```bash
make clean
make
```

Use the commands from the [Flash firmware onto the microcontroller](/learning-paths/microcontrollers/yolo-on-himax/flash-and-run/) section to regenerate the firmware image.

```bash
cd ../we2_image_gen_local/
cp ../EPII_CM55M_APP_S/obj_epii_evb_icv30_bdv10/gnu_epii_evb_WLCSP65/EPII_CM55M_gnu_epii_evb_WLCSP65_s.elf input_case1_secboot/
```

Run the script corresponding to the OS of your host machine.

{{< tabpane code=true >}}
  {{< tab header="Linux" language="shell">}}
./we2_local_image_gen project_case1_blp_wlcsp.json
  {{< /tab >}}
  {{< tab header="MacOS" language="shell">}}
./we2_local_image_gen_macOS_arm64 project_case1_blp_wlcsp.json
  {{< /tab >}}
{{< /tabpane >}}

Finally, use `xmodem` to flash the image.
+

```bash
python xmodem/xmodem_send.py --port=[your COM port] \
--baudrate=921600 --protocol=xmodem \
--file=we2_image_gen_local/output_case1_sec_wlcsp/output.img \
--model=[model argument from the table]
```

Press the reset button when prompted before moving on.

## Download the Himax AI web toolkit

The Himax AI web toolkit enables a browser-based graphical user interface (GUI) for the live camera feed.

```bash
wget https://github.com/HimaxWiseEyePlus/Seeed_Grove_Vision_AI_Module_V2/releases/download/v1.1/Himax_AI_web_toolkit.zip
unzip Himax_AI_web_toolkit.zip
```

Open the unzipped directory in your file browser and double-click `index.html`. This will open the GUI within your default browser.

## Connect to the Grove Vision AI

Select `Grove Vision AI(V2)` in the top-right hand corner and press the `Connect` button. Follow the instructions to set up the connection. You should now see a video feed with bounding boxes showing identified objects, poses, or faces.

![Himax web UI](./himax_web_ui.jpg)

The images below were captured from the models run in the toolkit.

### Object detection
![object_detection](./object_detection.jpg)

### Face detection
![face_detection](./face_detection.jpg)
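When watching the serial output for long stretches, it can help to filter the log for detection scores above the threshold. The snippet below is an illustrative sketch that assumes log lines shaped like the `person_score:113` sample shown in the flashing section; pipe your live serial output into it instead of the embedded sample.

```shell
# Extract person_score values and flag frames above the detection threshold of 100.
# The here-doc stands in for your live serial log.
grep -o 'person_score:[0-9]*' <<'EOF' | awk -F: '$2 > 100 {print "person detected (score " $2 ")"}'
b'person_score:113'
b'invoke pass'
b'person_score:112'
EOF
```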