Isaac ROS package support for Time-of-Flight Depth cameras

I own a time-of-flight depth camera and I wish to use Isaac ROS packages to implement features like object detection, pose estimation, etc.

I have enumerated my depth camera as an Argus camera and successfully obtained a stream in mono8 format using the isaac_ros_argus_camera package.

Can anyone suggest which packages I can use to implement the above-mentioned features?

The existing Isaac ROS packages for object detection and pose estimation all rely on an input RGB stream from a color camera; the packages do not support a depth-only input stream.

If you have a specific object you’d like to detect or find the pose of, you could train a custom model using the NVIDIA TAO Toolkit, and then run that model on your robot using the Isaac ROS DNN Inference repository’s packages.
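As a rough, untested sketch of how a downstream node might consume the detection output, here is a minimal rclpy subscriber. It assumes the detection pipeline publishes vision_msgs/msg/Detection2DArray on a /detections topic; the actual topic name and message layout depend on the package and launch configuration you end up using.

# Minimal sketch: subscribe to the (assumed) /detections topic published by an
# Isaac ROS detection pipeline and report how many detections arrive per frame.
import rclpy
from rclpy.node import Node
from vision_msgs.msg import Detection2DArray


class DetectionListener(Node):
    def __init__(self):
        super().__init__('detection_listener')
        # Topic name is an assumption; adjust it to match your launch files.
        self.create_subscription(
            Detection2DArray, '/detections', self.on_detections, 10)

    def on_detections(self, msg):
        # The per-detection field layout varies slightly across vision_msgs
        # versions, so this sketch only reports the detection count.
        self.get_logger().info(
            f'frame {msg.header.frame_id}: {len(msg.detections)} detections')


def main():
    rclpy.init()
    node = DetectionListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()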

Thank you for the input.

I have followed the steps in the Isaac ROS Argus Camera package to save frames. Is there any way I can save the image as a raw frame?

The following are the commands I used in the Docker environment:

Publisher node:
ros2 launch isaac_ros_argus_camera isaac_ros_argus_camera_mono_launch.py

Consumer node:
ros2 run image_view image_saver --ros-args -r image:=/left/image_raw -p filename_format:="image.jpg"

When I try to save image.jpg as image.raw, I get an error from the imwrite() function in loadsave.cpp.

Any suggestions?

What do you mean by “raw frame”? The ROS 2 image_view image_saver tool uses the filename extension to determine the format in which to save the received image message. If you specify a filename with a nonstandard extension like image.raw, the tool cannot determine the requested format and throws an error.
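If what you are after is the unencoded pixel buffer rather than an encoded image file, one option is to bypass image_saver and write the bytes of the sensor_msgs/Image message yourself. Below is a minimal, untested sketch; the /left/image_raw topic comes from your launch above, and the frame.raw output filename is just an example.

# Minimal sketch: subscribe to the camera topic and dump the unencoded pixel
# buffer of the first received sensor_msgs/Image message to a .raw file.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image


class RawFrameSaver(Node):
    def __init__(self):
        super().__init__('raw_frame_saver')
        self.create_subscription(Image, '/left/image_raw', self.on_image, 10)
        self.saved = False

    def on_image(self, msg):
        if self.saved:
            return
        # msg.data holds the raw bytes; width, height, and encoding describe
        # the layout (mono8 means one byte per pixel).
        with open('frame.raw', 'wb') as f:
            f.write(bytes(msg.data))
        self.get_logger().info(
            f'saved {msg.width}x{msg.height} {msg.encoding} frame to frame.raw')
        self.saved = True


def main():
    rclpy.init()
    node = RawFrameSaver()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()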