This library facilitates the integration and management of cameras (acquisition side) within a ROS2 application. It offers a Camera class that simplifies capturing frames from both RGB and depth cameras and supports custom processing through post-processing functions. With this library, vision application developers no longer need to manage topic subscriptions and camera communication directly: they can use a straightforward API to access the data they need (color frames, depth frames, or camera info) and simply provide post-processing code, which the library calls in real time as needed.
- 🎥 Frame Acquisition: Support for acquiring color and depth frames from ROS2-compatible cameras. Frames can be acquired in two ways: via one-shot class methods or by starting a continuous acquisition/processing loop; the class can also be launched directly as a ROS2 node.
- 🛠️ Post-Processing: The ability to apply a post-processing function to the acquired frames.
- 🔧 ROS2 Integration: Uses ROS2 topics and parameters to manage data.
To install the library, clone the repository into the `src` folder of your workspace and build it using colcon:

```shell
git clone https://github.com/SamueleSandrini/vision_system
cd vision_system
pip install -r requirements.txt
cd ..
colcon build --symlink-install
```
The Camera class serves as a ROS2 node: it can be used directly as a regular class through its API methods or run as a ROS2 node. The following table summarizes the main API methods:
| Method | Description |
|---|---|
| `retrieve_camera_info` | Retrieves the camera information and stores it in the class. |
| `acquire_color_frame_once` | Acquires a single color frame. |
| `acquire_frames_once` | Acquires a single pair of color and depth frames. |
| `process_once` | Processes a single pair of color and depth frames using the provided post-processing function. |
| `set_processing_function` | Dynamically loads and sets the post-processing function. |
| `start_acquire` | Starts the synchronized acquisition of color and depth frames, with post-processing. |
| `start_acquire_only_color` | Starts the acquisition of color frames only, with post-processing. |
| `get_frames` | Retrieves the most recently acquired color and depth frames. |
| `get_color_frame` | Retrieves the most recently acquired color frame. |
| `get_distance_frame` | Retrieves the most recently acquired depth frame. |
| `get_camera_info` | Retrieves the camera information. |
| `get_frame_id` | Retrieves the frame ID from the camera information. |
Additionally, you can configure a custom post-processing function for the acquisition process. This avoids rewriting the same code repeatedly: you simply add the custom function to the pipeline. Examples of post-processing functions include object detection with YOLO, broadcasting transforms, and displaying images with `cv2.imshow`. To do this, create a `CustomPostProcessing` class that inherits from `PostProcessing`, an abstract class with two methods that must be implemented:

- `process_frames(self, color_frame, distance_frame)`: automatically called inside `process_once` or `start_acquire`.
- `process_frame(self, color_frame)`: used only when `start_acquire_only_color` is employed.

If one of these methods is not required, you can implement it as an empty method (using `pass`).
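The pattern above can be sketched as follows. Note that the `PostProcessing` base class is re-declared here only to keep the snippet self-contained (in real use you would import the library's own class), and the body of `process_frames` is just a placeholder computation on frames treated as row-major 2D arrays:

```python
from abc import ABC, abstractmethod


# Stand-in mirroring the library's PostProcessing abstract class,
# declared here only so the example runs on its own.
class PostProcessing(ABC):
    @abstractmethod
    def process_frames(self, color_frame, distance_frame):
        ...

    @abstractmethod
    def process_frame(self, color_frame):
        ...


class CustomPostProcessing(PostProcessing):
    def process_frames(self, color_frame, distance_frame):
        # Placeholder logic: return the depth value at the image center.
        rows = len(distance_frame)
        cols = len(distance_frame[0])
        return distance_frame[rows // 2][cols // 2]

    def process_frame(self, color_frame):
        # Color-only pipeline not needed in this example.
        pass
```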
The module `vision_system_utils.py` provides several utilities, such as methods to convert pixel coordinates and depth into the corresponding 3D points.
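To illustrate what such a conversion involves, here is the standard pinhole-camera deprojection; the function name and signature below are hypothetical and do not reflect the actual API of `vision_system_utils.py`:

```python
def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Convert pixel (u, v) with a depth value into a 3D point in the
    camera frame, via the pinhole model. The returned point uses the
    same unit as `depth` (e.g. meters)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

With a ROS2 `sensor_msgs/CameraInfo` message, the intrinsics can be read from the row-major `K` matrix: `fx = K[0]`, `fy = K[4]`, `cx = K[2]`, `cy = K[5]`.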
You can find examples of using the Camera node and of defining a PostProcessing subclass here: Examples.
If you want to use the predefined node, set the topic names and other settings in the config file, then use the launch file:

```shell
ros2 launch vision_system vision_system.launch.py
```
Contributions, especially new utilities, are very welcome ;).
Feel free to open an issue if something is not working or if there is a feature that would be useful.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.