We are trying to build an autonomous vehicle. We want to fuse distance data from lidar with object detections from a camera. I searched for topics like this using Isaac ROS but couldn't find any. We are new to this, so can someone suggest the correct way to get this done?
You’ll want to find the extrinsics between your lidar and your camera first (the pose of the camera in the frame of the lidar). Then you can match camera pixels to lidar beams using the calibrated intrinsics of the camera and the beam angles of the lidar to “colorize” the point cloud. With this association in hand, you can use Isaac ROS Image Segmentation or Isaac ROS Object Detection to perform the detection in camera pixel space, then use your pixel-to-point mapping to look up the lidar depth points inside that detection.
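To make the projection step concrete, here is a minimal numpy sketch of projecting lidar points into camera pixel space. It assumes you already have a 4x4 extrinsic transform `T_cam_lidar` from your calibration and a standard 3x3 pinhole intrinsic matrix `K`; both names are placeholders for whatever your calibration produces, and this ignores lens distortion (undistort/rectify the image first, or apply the distortion model here):

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K, image_width, image_height):
    """Project Nx3 lidar points into camera pixel coordinates.

    points_lidar : (N, 3) points in the lidar frame.
    T_cam_lidar  : (4, 4) extrinsic transform taking lidar-frame points
                   into the camera frame (from your calibration).
    K            : (3, 3) pinhole intrinsic matrix of the camera.
    Returns (M, 2) pixel coordinates, (M,) camera-frame depths, and the
    indices of the original lidar points that landed inside the image.
    """
    # Homogeneous coordinates, then transform into the camera frame.
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera (positive z).
    in_front = pts_cam[:, 2] > 0.0
    pts_cam = pts_cam[in_front]

    # Pinhole projection: u = fx*x/z + cx, v = fy*y/z + cy.
    uvw = (K @ pts_cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]

    # Keep only pixels that fall inside the image bounds.
    in_image = (
        (uv[:, 0] >= 0) & (uv[:, 0] < image_width) &
        (uv[:, 1] >= 0) & (uv[:, 1] < image_height)
    )
    indices = np.flatnonzero(in_front)[in_image]
    return uv[in_image], pts_cam[in_image, 2], indices
```

The returned pixel coordinates are the association you need: each surviving lidar point now has a pixel location you can compare against segmentation masks or detection boxes.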
An alternative way of thinking about this: take the bounding box in pixel space and project it into the scene using the intrinsics of your imager, then find all of the lidar points contained within it.
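In practice the two views are equivalent: once the points are projected into pixel space, selecting the ones inside a detection box is the same as keeping the points inside the box's frustum. A small sketch, continuing from the (assumed) `project_lidar_to_image` helper above:

```python
import numpy as np

def points_in_bbox(uv, depths, bbox):
    """Return depths of projected lidar points inside a detection box.

    uv     : (M, 2) pixel coordinates from project_lidar_to_image().
    depths : (M,) camera-frame depths of the same points.
    bbox   : (x_min, y_min, x_max, y_max) detection box in pixels.
    """
    x_min, y_min, x_max, y_max = bbox
    mask = (
        (uv[:, 0] >= x_min) & (uv[:, 0] <= x_max) &
        (uv[:, 1] >= y_min) & (uv[:, 1] <= y_max)
    )
    return depths[mask]

# Usage sketch (names are placeholders for your own data):
# uv, depths, idx = project_lidar_to_image(cloud, T_cam_lidar, K, 1920, 1080)
# obj_depths = points_in_bbox(uv, depths, detection_bbox)
# distance = np.median(obj_depths) if obj_depths.size else None
```

Taking the median (rather than the minimum or mean) of the depths inside the box is one common way to reduce the influence of background points that the rectangular box inevitably captures around the object's silhouette.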