docs: update package names
Signed-off-by: Taekjin LEE <taekjin.lee@tier4.jp>
technolojin committed Nov 20, 2024
1 parent 476ee27 commit c3524ce
Showing 12 changed files with 24 additions and 24 deletions.
@@ -149,7 +149,7 @@ Autoware has the following two types of parameter files for ROS packages:

The schema file path is `INSERT_PATH_TO_PACKAGE/schema/` and the schema file name is `INSERT_NODE_NAME.schema.json`. To adapt the template to the ROS node, replace each `INSERT_...` and add all parameters `1..N`.

See example: _Lidar Apollo Segmentation TVM Nodes_ [schema](https://github.com/autowarefoundation/autoware.universe/blob/main/perception/lidar_apollo_segmentation_tvm_nodes/schema/lidar_apollo_segmentation_tvm_nodes.schema.json)
See example: _Image Projection Based Fusion - Pointpainting_ [schema](https://github.com/autowarefoundation/autoware.universe/blob/main/universe/perception/autoware_image_projection_based_fusion/schema/pointpainting.schema.json)
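
As an illustration of the template described above, a filled-in schema for a hypothetical node named `my_radar_filter` with a single `max_range` parameter (both names are invented for this sketch and are not Autoware packages) could look like the following, saved as `INSERT_PATH_TO_PACKAGE/schema/my_radar_filter.schema.json`:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Parameters for my_radar_filter",
  "type": "object",
  "definitions": {
    "my_radar_filter": {
      "type": "object",
      "properties": {
        "max_range": {
          "type": "number",
          "description": "Maximum detection range in meters.",
          "default": 100.0,
          "minimum": 0.0
        }
      },
      "required": ["max_range"],
      "additionalProperties": false
    }
  },
  "properties": {
    "/**": {
      "type": "object",
      "properties": {
        "ros__parameters": {
          "$ref": "#/definitions/my_radar_filter"
        }
      },
      "required": ["ros__parameters"],
      "additionalProperties": false
    }
  },
  "required": ["/**"],
  "additionalProperties": false
}
```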

### Attributes

@@ -10,28 +10,28 @@ This diagram describes the pipeline for radar faraway dynamic object detection.

### Crossing filter

- [radar_crossing_objects_noise_filter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/radar_crossing_objects_noise_filter)
- [radar_crossing_objects_noise_filter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_radar_crossing_objects_noise_filter)

This package can filter out noise objects that cross the ego vehicle's path, which are most likely ghost objects.

### Velocity filter

- [object_velocity_splitter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/object_velocity_splitter)
- [object_velocity_splitter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_object_velocity_splitter)

Static objects include a lot of noise, such as reflections from the ground.
For radars, dynamic objects can in many cases be detected stably.
To filter out static objects, `object_velocity_splitter` can be used.

### Range filter

- [object_range_splitter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/object_range_splitter)
- [object_range_splitter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_object_range_splitter)

For some radars, ghost objects sometimes appear around nearby objects.
To filter out these objects, `object_range_splitter` can be used.

### Vector map filter

- [object-lanelet-filter](https://github.com/autowarefoundation/autoware.universe/blob/main/perception/detected_object_validation/object-lanelet-filter.md)
- [object-lanelet-filter](https://github.com/autowarefoundation/autoware.universe/blob/main/perception/autoware_detected_object_validation/object-lanelet-filter.md)

In most cases, vehicles drive within the drivable area.
To filter out objects that are outside the drivable area, `object-lanelet-filter` can be used.
@@ -41,12 +41,12 @@ Note that if you use `object-lanelet-filter` for radar faraway detection, you ne

### Radar object clustering

- [radar_object_clustering](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/radar_object_clustering)
- [radar_object_clustering](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_radar_object_clustering)

This package can combine multiple radar detections of a single object into one detection and adjust its class and size.
It can suppress object splitting in the tracking module.

![radar_object_clustering](https://raw.githubusercontent.com/autowarefoundation/autoware.universe/main/perception/radar_object_clustering/docs/radar_clustering.drawio.svg)
![radar_object_clustering](https://raw.githubusercontent.com/autowarefoundation/autoware.universe/main/perception/autoware_radar_object_clustering/docs/radar_clustering.drawio.svg)

## Note

@@ -58,7 +58,7 @@ In detail, please see [this document](faraway-object-detection.md)

### Radar fusion to LiDAR-based 3D object detection

- [radar_fusion_to_detected_object](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/radar_fusion_to_detected_object)
- [radar_fusion_to_detected_object](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_radar_fusion_to_detected_object)

This package contains a sensor fusion module for radar-detected objects and 3D detected objects. The fusion node can:

@@ -43,20 +43,20 @@ Radar can detect x-axis velocity as doppler velocity, but cannot detect y-axis v

### Message converter

- [radar_tracks_msgs_converter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/radar_tracks_msgs_converter)
- [radar_tracks_msgs_converter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_radar_tracks_msgs_converter)

This package converts `radar_msgs/msg/RadarTracks` into `autoware_auto_perception_msgs/msg/DetectedObject`, applying ego vehicle motion compensation and coordinate transformation.

### Object merger

- [object_merger](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/object_merger)
- [object_merger](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_object_merger)

This package can merge two topics of `autoware_auto_perception_msgs/msg/DetectedObject`.

- [simple_object_merger](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/simple_object_merger)
- [simple_object_merger](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_simple_object_merger)

This package can simply merge multiple topics of `autoware_auto_perception_msgs/msg/DetectedObject`.
Unlike [object_merger](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/object_merger), this package does not use an association algorithm and can merge objects at low computational cost.
Unlike [object_merger](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_object_merger), this package does not use an association algorithm and can merge objects at low computational cost.

- [topic_tools](https://github.com/ros-tooling/topic_tools)

@@ -64,7 +64,7 @@ For convenient use of radar pointcloud within existing LiDAR packages, we sugges
For the considered use cases:

- Use [pointcloud_preprocessor](https://github.com/autowarefoundation/autoware.universe/tree/main/sensing/pointcloud_preprocessor) for radar scan.
- Apply obstacle segmentation like [ground segmentation](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/ground_segmentation) to radar points for LiDAR-less (camera + radar) systems.
- Apply obstacle segmentation like [ground segmentation](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_ground_segmentation) to radar points for LiDAR-less (camera + radar) systems.

## Appendix

@@ -78,7 +78,7 @@ uint16 BICYCLE = 32006;
uint16 PEDESTRIAN = 32007;
```

For implementation details, please see [radar_tracks_msgs_converter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/radar_tracks_msgs_converter).
For implementation details, please see [radar_tracks_msgs_converter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_radar_tracks_msgs_converter).

## Note

@@ -788,7 +788,7 @@ if you decided to use container for 2D detection pipeline are:
for example, we will use `/perception/object_detection` as the tensorrt_yolo node namespace;
this will be explained in the Autoware usage section.
For more information,
please check [image_projection_based_fusion](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/image_projection_based_fusion) package.
please check [image_projection_based_fusion](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_image_projection_based_fusion) package.

After adding the prepared `camera_node_container.launch.py` to our forked `common_sensor_launch` package,
we need to build the package:
@@ -161,7 +161,7 @@ but if you want to use `camera-lidar fusion` you need to change your perception
If you want to use traffic light recognition and visualization,
you can set `traffic_light_recognition/enable_fine_detection` to true (the default).
Please check
[traffic_light_fine_detector](https://autowarefoundation.github.io/autoware.universe/main/perception/traffic_light_fine_detector/)
[traffic_light_fine_detector](https://autowarefoundation.github.io/autoware.universe/main/perception/autoware_traffic_light_fine_detector/)
page for more information.
If you don't want to use the traffic light classifier, you can disable it:

@@ -37,7 +37,7 @@ that we want
to change it since `tier4_perception_component.launch.xml` is the top-level launch file of other perception launch files.
Here are some predefined perception launch arguments:

- **`occupancy_grid_map_method:`** This argument determines the occupancy grid map method for perception stack. Please check [probabilistic_occupancy_grid_map](https://autowarefoundation.github.io/autoware.universe/main/perception/probabilistic_occupancy_grid_map/) package for detailed information.
- **`occupancy_grid_map_method:`** This argument determines the occupancy grid map method for perception stack. Please check [probabilistic_occupancy_grid_map](https://autowarefoundation.github.io/autoware.universe/main/perception/autoware_probabilistic_occupancy_grid_map/) package for detailed information.
The default probabilistic occupancy grid map method is `pointcloud_based_occupancy_grid_map`.
If you want to change it to the `laserscan_based_occupancy_grid_map`, you can change it here:

@@ -47,7 +47,7 @@ Here are some predefined perception launch arguments:
```
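
The snippet itself is collapsed in this diff view; as a sketch rather than the verbatim file content, the override in `tier4_perception_component.launch.xml` might look like:

```xml
<!-- Sketch only: switch to the laserscan-based occupancy grid map method. -->
<arg name="occupancy_grid_map_method" default="laserscan_based_occupancy_grid_map"/>
```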

- **`detected_objects_filter_method:`** This argument determines the filter method for detected objects.
Please check [detected_object_validation](https://autowarefoundation.github.io/autoware.universe/main/perception/detected_object_validation/) package for detailed information about lanelet and position filter.
Please check [detected_object_validation](https://autowarefoundation.github.io/autoware.universe/main/perception/autoware_detected_object_validation/) package for detailed information about lanelet and position filter.
The default detected object filter method is `lanelet_filter`.
If you want to change it to the `position_filter`, you can change it here:

@@ -57,7 +57,7 @@ Here are some predefined perception launch arguments:
```
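
The corresponding snippet is also collapsed here; a sketch of the override might be:

```xml
<!-- Sketch only: filter detected objects by position instead of by lanelet. -->
<arg name="detected_objects_filter_method" default="position_filter"/>
```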

- **`detected_objects_validation_method:`** This argument determines the validation method for detected objects.
Please check [detected_object_validation](https://autowarefoundation.github.io/autoware.universe/main/perception/detected_object_validation/) package for detailed information about validation methods.
Please check [detected_object_validation](https://autowarefoundation.github.io/autoware.universe/main/perception/autoware_detected_object_validation/) package for detailed information about validation methods.
The default detected object validation method is `obstacle_pointcloud`.
If you want to change it to the `occupancy_grid`, you can change it here,
but remember it requires `laserscan_based_occupancy_grid_map` method as `occupancy_grid_map_method`:
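
The snippet for this change is collapsed as well; a sketch, together with the occupancy grid map method it depends on, might be:

```xml
<!-- Sketch only: occupancy_grid validation requires the laserscan-based occupancy grid map. -->
<arg name="detected_objects_validation_method" default="occupancy_grid"/>
<arg name="occupancy_grid_map_method" default="laserscan_based_occupancy_grid_map"/>
```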
@@ -99,7 +99,7 @@ we will apply these changes `tier4_perception_component.launch.xml` instead of `
Here are some example changes for the perception pipeline:

- **`remove_unknown:`** This parameter determines whether to remove unknown objects during camera-lidar fusion.
Please check [roi_cluster_fusion](https://github.com/autowarefoundation/autoware.universe/blob/main/perception/image_projection_based_fusion/docs/roi-cluster-fusion.md) node for detailed information.
Please check [roi_cluster_fusion](https://github.com/autowarefoundation/autoware.universe/blob/main/perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion.md) node for detailed information.
The default value is `true`.
If you want to change it to `false`,
you can add this argument to `tier4_perception_component.launch.xml`,
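
The example block that follows is collapsed in this diff view; as a non-authoritative sketch, the added argument might look like:

```xml
<!-- Sketch only: keep unknown objects at camera-lidar fusion. -->
<arg name="remove_unknown" default="false"/>
```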
2 changes: 1 addition & 1 deletion docs/how-to-guides/others/running-autoware-without-cuda.md
@@ -13,7 +13,7 @@ Autoware Universe's object detection can be run using one of five possible confi
- `lidar-centerpoint` + `tensorrt_yolo`
- `euclidean_cluster`

Of these five configurations, only the last one (`euclidean_cluster`) can be run without CUDA. For more details, refer to the [`euclidean_cluster` module's README file](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/euclidean_cluster).
Of these five configurations, only the last one (`euclidean_cluster`) can be run without CUDA. For more details, refer to the [`euclidean_cluster` module's README file](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_euclidean_cluster).

## Running traffic light detection without CUDA

@@ -26,14 +26,14 @@ the readme file accompanying **"traffic_light_classifier"** package. These instr
the process of training the model using your own dataset. To facilitate your training, we have also provided
an example dataset containing three distinct classes (green, yellow, red), which you can leverage during the training process.

Detailed instructions for training the traffic light classifier model can be found **[here](https://github.com/autowarefoundation/autoware.universe/blob/main/perception/traffic_light_classifier/README.md)**.
Detailed instructions for training the traffic light classifier model can be found **[here](https://github.com/autowarefoundation/autoware.universe/blob/main/perception/autoware_traffic_light_classifier/README.md)**.

## Training CenterPoint 3D object detection model

The CenterPoint 3D object detection model within Autoware has been trained using the **[autowarefoundation/mmdetection3d](https://github.com/autowarefoundation/mmdetection3d/blob/main/projects/AutowareCenterPoint/README.md)** repository.

To train custom CenterPoint models and convert them into ONNX format for deployment in Autoware, please refer to the instructions provided in the README file included with Autoware's
**[lidar_centerpoint](https://autowarefoundation.github.io/autoware.universe/main/perception/lidar_centerpoint/)** package. These instructions will provide a step-by-step guide for training the CenterPoint model.
**[lidar_centerpoint](https://autowarefoundation.github.io/autoware.universe/main/perception/autoware_lidar_centerpoint/)** package. These instructions will provide a step-by-step guide for training the CenterPoint model.

In order to assist you with your training process, we have also included an example dataset in the TIER IV dataset format.
