The official NVIDIA Carter reference design has both a Velodyne Lidar and a RealSense camera, and there are sample and tutorial apps for both; we verified that everything works as documented. One issue is that the Lidar can’t see directly in front of the robot, so Carter bumps into obstacles that aren’t tall enough to intersect the Lidar’s lowest beams. We therefore tried sensor fusion with the RealSense camera pointed down at the area directly in front of the robot. It almost works, but:
If we use isaac.rgbd_processing.DepthImageFlattening to convert the RealSense depth messages to FlatScan, everything behaves as intended, except that the beam end points are perceived as obstacles by the navigation stack, making Carter hesitate and stop/start even when there is no obstacle at all.
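For context, the way we wire DepthImageFlattening is roughly the following (a minimal sketch in the usual Isaac SDK app JSON format; the node names, channel names, and the specific parameter values shown are illustrative, not copied from our actual app):

```json
{
  "graph": {
    "nodes": [
      {
        "name": "depth_flattening",
        "components": [
          { "name": "message_ledger", "type": "isaac::alice::MessageLedger" },
          { "name": "flattening", "type": "isaac::rgbd_processing::DepthImageFlattening" }
        ]
      }
    ],
    "edges": [
      { "source": "camera/realsense/depth", "target": "depth_flattening/flattening/depth" },
      { "source": "depth_flattening/flattening/flatscan", "target": "navigation.subgraph/interface/flatscan" }
    ]
  },
  "config": {
    "depth_flattening": {
      "flattening": {
        "height_min": 0.08,
        "height_max": 0.20
      }
    }
  }
}
```

With a graph like this the FlatScan output is consumed correctly by navigation; the problem is only the spurious obstacles at the beam end points.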
If we use FreespaceFromDepth instead of DepthImageFlattening, there is a parameter, last_range_cell_additional_contribution, that can be set to zero to make the beam end points disappear, which would solve the issue above.
Unfortunately, whenever we use FreespaceFromDepth the whole system becomes completely unstable, with strange crashes indicating various buffer corruptions, even though the parameters and input/output messages are almost identical to what we use with DepthImageFlattening.
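The config change we apply when switching to FreespaceFromDepth is essentially just the one parameter below (sketch only; the node and component names are placeholders, and only last_range_cell_additional_contribution is taken from the documentation):

```json
{
  "config": {
    "freespace": {
      "freespace_from_depth": {
        "last_range_cell_additional_contribution": 0
      }
    }
  }
}
```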
We note that none of the Isaac SDK samples/tutorials actually use FreespaceFromDepth, so perhaps the documentation omits an important parameter? Or perhaps this component wasn’t compiled for JetPack 4.5.1?
Alternatively, is there a way to make the beam end points disappear in DepthImageFlattening?
We also tried isaac.egm_fusion.EvidenceMapOverlay, but were not successful with that either: no crashes, but sending the evidence grid maps from the Lidar and RealSense to map_1 and map_2 produces no output on combined_egm. There’s no example for this component either, so we wonder whether we have misspelled a parameter name.
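The edge wiring we tried looks roughly like this (a sketch, assuming the standard Isaac SDK edge syntax of node/component/channel; the node and component names here are placeholders from our app, while map_1, map_2, and combined_egm are the channel names we took from the component description):

```json
{
  "graph": {
    "edges": [
      { "source": "lidar_egm/evidence_map/evidence_grid_map", "target": "fusion/overlay/map_1" },
      { "source": "camera_egm/evidence_map/evidence_grid_map", "target": "fusion/overlay/map_2" },
      { "source": "fusion/overlay/combined_egm", "target": "navigation_egm/viewer/evidence_grid_map" }
    ]
  }
}
```

Both input channels show messages arriving in Sight, but nothing is ever published on combined_egm.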