This repository showcases some of my major and mini robotics projects, done as part of coursework and the Robotics Nanodegree. Each project links to its corresponding project repo and README.
This project uses the Point Cloud Library (PCL), an open-source project for 2D/3D image and point-cloud processing. The PCL API documentation here contains details on implementing many state-of-the-art algorithms for filtering, feature estimation, surface reconstruction, and segmentation.
The project uses the "RANdom SAmple Consensus" (RANSAC) algorithm to segment point clouds of random objects kept on a cluttered table. A successful implementation of the algorithm produces new point clouds containing the table and the objects separately. The README in the project repo briefly explains the process and the filtering techniques used; more can be found here - python-pcl.
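The core RANSAC plane-fitting step can be sketched in plain NumPy. This is an illustrative reimplementation, not the project's actual code (which uses python-pcl's segmenter); the function name and toy scene below are my own:

```python
import numpy as np

def ransac_plane(points, n_iters=200, dist_thresh=0.01, seed=None):
    """Fit a plane ax + by + cz + d = 0 to an (N, 3) cloud with RANSAC.

    Returns (inlier_indices, (a, b, c, d))."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = np.array([], dtype=int), None
    for _ in range(n_iters):
        # 1. sample 3 distinct points and form a candidate plane
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample, skip
            continue
        normal /= norm
        d = -normal.dot(p0)
        # 2. count points within dist_thresh of the candidate plane
        dists = np.abs(points @ normal + d)
        inliers = np.nonzero(dists < dist_thresh)[0]
        # 3. keep the model that explains the most points
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (*normal, d)
    return best_inliers, best_model

# Toy scene: a flat "table" at z = 0 plus a few "object" points above it
rng = np.random.default_rng(0)
table = np.c_[rng.uniform(0, 1, (300, 2)), np.zeros(300)]
objects = np.c_[rng.uniform(0, 1, (30, 2)), rng.uniform(0.1, 0.3, 30)]
cloud = np.vstack([table, objects])

inliers, model = ransac_plane(cloud, seed=1)
table_cloud = cloud[inliers]                       # the segmented table
object_cloud = np.delete(cloud, inliers, axis=0)   # everything else
```

Splitting the cloud on the inlier set is exactly what produces the separate "table" and "objects" point clouds described above.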
Click here for Project Details
The project creates a ROS node that performs image segmentation on the objects kept on the cluttered table, using the `pcd` files from the RANSAC plane-fitting project above. The setup was run and tested in the RoboND simulator environment provided by Udacity. The segmentation is visualized in RViz by selecting the `/pcl_objects` topic.
A successful implementation of the cluster-based segmentation, applied after Voxel Grid filtering, assigns a different colour to each cluster.
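Voxel Grid filtering, the downsampling step that makes the later clustering tractable, can be sketched in NumPy. The project itself uses PCL's voxel grid filter; this stand-alone version is only illustrative:

```python
import numpy as np

def voxel_grid_downsample(points, leaf_size=0.05):
    """Replace all points that fall in the same cubic voxel with their centroid."""
    # assign each point to a voxel by integer-dividing its coordinates
    voxel_idx = np.floor(points / leaf_size).astype(np.int64)
    # group points by voxel and average each group
    _, inverse, counts = np.unique(voxel_idx, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.ravel()
    sums = np.zeros((len(counts), 3))
    np.add.at(sums, inverse, points)          # sum points per voxel
    return sums / counts[:, None]             # centroid per voxel

# 1000 random points in the unit cube collapse to 8 centroids with a 0.5 m leaf
cloud = np.random.default_rng(1).uniform(0, 1, (1000, 3))
down = voxel_grid_downsample(cloud, leaf_size=0.5)
```

A smaller `leaf_size` keeps more detail at the cost of more points; the leaf size is the main knob traded off against clustering speed.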
The Gazebo setup for the same table top consists of a stick robot with an RGB-D camera attached to its head.
Click here for Project Details
This project builds on the scripts and concepts from the RANSAC plane-fitting and image-segmentation projects to recognize objects on the cluttered table. An SVM classifier is trained to detect the objects kept on the table.
Click here for Project Details
This project gives the PR2 robot the ability to locate an object in a cluttered environment, pick it up, and move it to another location. This is an interesting problem to solve and a challenge at the forefront of the robotics industry today. The project uses a perception pipeline, here and here, to identify target objects from a "Pick-List" in that particular order, pick up those objects, and place them in the corresponding drop-boxes.
Image segmentation and object detection are the two key parts of the perception pipeline for the PR2 robot. The control/actuation part is handled by the `pr2_mover` function, which is called to pick and place the detected objects.
The project deals with three test-world scenarios, one of whose results is shown in the image below.
Click here for Project Details
A mini-project built as part of the major SLAM project; it uses ROS and Gazebo to build a mobile robot that chases a white ball around a room.
Click here for Project Details
SLAM (Simultaneous Localization And Mapping) (Nvidia article here) is a technique in which a robot builds a map of its spatial environment while keeping track of its own position within that map. Real-Time Appearance-Based Mapping (RTAB-Map) is an RGB-D, stereo, graph-based SLAM approach built on an incremental appearance-based loop-closure detector. More here.
This project implements SLAM with the RTAB-Map (Real-Time Appearance-Based Mapping) package. A 2D occupancy grid and a 3D octomap are created from a simulated environment. The results show the robot successfully localizing itself and mapping its surroundings.