In ROS, the ZED node publishes a depth point cloud directly, but I can't find a comparable output for the ZED camera in Isaac. What is the recommended way of turning stereo imagery into a depth image, or ideally an RGBD image?
The ZedCamera codelet was meant to be the first part of a subgraph that generates point clouds, but the key stereo-disparity codelet does not seem to be in Isaac SDK 2020.2. Your best bet may be to implement your own ZedCamera codelet that relies on the StereoLabs driver to handle rectification, disparity estimation, and depth-map generation.
The general pipeline for producing RGBD depth images or RGBXYZ point clouds would be:

1. Rectify each image pair from the ZED (the StereoRectification codelet can do this; see 2020.2/sdk/apps/samples/stereo_rectification/stereo_rectification.app.json).
2. Run the rectified pair through a disparity generator (semi-global matching or other algorithms; see VPI).
3. Convert disparity to depth (see 2020.2/engine/engine/gems/image/utils.hpp::ConvertDisparityToDepth).
4. Generate the point cloud from the depth map, and you are finished.
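As a rough sketch of the last two steps (disparity → depth, then depth → point cloud), here is the underlying math in Python/numpy, assuming a rectified pinhole model with focal length in pixels and baseline in meters (Z = f·b/d, then back-projection X = (u−cx)·Z/fx, Y = (v−cy)·Z/fy). The function names and the ZED-like intrinsics below are illustrative, not the Isaac SDK API:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert a disparity map (pixels) to a depth map (meters).
    Non-positive disparities are marked invalid with depth 0."""
    depth = np.zeros_like(disparity, dtype=np.float64)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth map into an (H, W, 3) XYZ point cloud
    in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.dstack((x, y, depth))

# Toy example: a 2x2 disparity map, fx = 700 px, baseline = 0.12 m
# (roughly ZED-like values, chosen only for illustration).
disp = np.array([[70.0, 35.0],
                 [0.0, 14.0]])        # 0 = no match found
depth = disparity_to_depth(disp, 700.0, 0.12)
cloud = depth_to_pointcloud(depth, 700.0, 700.0, 0.5, 0.5)
print(depth)  # [[1.2 2.4] [0.  6. ]]
```

Zipping each XYZ point with the corresponding rectified RGB pixel then gives the RGBXYZ cloud.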
Hi Hemals. For the record, there is no stereo → disparity codelet in SDK 2021.1 either. It would be useful to have one.