How to change the image dimensions of RealSense images in Isaac ROS

[screenshot: supported RealSense image dimensions and their frame rates]
These are the various image dimensions I can obtain from the Intel RealSense camera, each corresponding to a different frame rate. I want to adjust the input data frequency accordingly. Could you either suggest a method to modify the input frequency or recommend which image dimensions I should use from the camera to achieve the desired frequency?

This is the list of topics I'm receiving related to the camera inputs:

/realsense2_camera/aligned_depth_to_color/camera_info
/realsense2_camera/aligned_depth_to_color/image_raw
/realsense2_camera/aligned_depth_to_color/image_raw/compressed
/realsense2_camera/aligned_depth_to_color/image_raw/compressedDepth
/realsense2_camera/aligned_depth_to_color/image_raw/nitros
/realsense2_camera/aligned_depth_to_color/image_raw/theora
/realsense2_camera/color/image_raw/compressed
/realsense2_camera/color/image_raw/compressedDepth
/realsense2_camera/color/image_raw/theora
/realsense2_camera/color/metadata
/realsense2_camera/depth/camera_info
/realsense2_camera/depth/image_rect_raw
/realsense2_camera/depth/image_rect_raw/compressed
/realsense2_camera/depth/image_rect_raw/compressedDepth
/realsense2_camera/depth/image_rect_raw/theora
/realsense2_camera/depth/metadata
/realsense2_camera/extrinsics/depth_to_color

Camera - Intel RealSense D435
Hardware - NVIDIA Jetson AGX Orin

Hi @sarthakgarg0303

To change the camera sensor settings, you need to modify the specific launch.py file for the desired demo.

For example, if you are working with Isaac ROS Visual SLAM, the file isaac_ros_visual_slam_realsense.launch.py contains an option to change the camera setup.
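As an illustration, the stream profiles are typically set where the RealSense node is declared in the launch file. This is a minimal sketch, not the actual demo launch file: the parameter names follow the realsense2_camera ROS 2 wrapper, but the structure of the real isaac_ros_visual_slam_realsense.launch.py may differ.

```python
# Sketch of a ROS 2 launch file that starts the RealSense node with an
# explicit stream profile (WIDTHxHEIGHTxFPS). Adapt the parameters to
# match the actual demo launch file.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    realsense_node = Node(
        package='realsense2_camera',
        executable='realsense2_camera_node',
        name='realsense2_camera',
        parameters=[{
            # 640x480 at 60 fps for both the depth and color streams
            'depth_module.depth_profile': '640x480x60',
            'rgb_camera.color_profile': '640x480x60',
            'align_depth.enable': True,
        }],
    )
    return LaunchDescription([realsense_node])
```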

If you are working with Isaac ROS pose estimation, the camera resolution is set in these lines:

But remember that the pose estimation demo is trained at 640x480. If you want to work at other resolutions, you also need to update the model.

Best,
Raffaello

Thank you @Raffaello. I'm working with FoundationPose pose estimation in Isaac ROS, and I changed the YAML file to get the desired inputs from the RealSense camera. The depth and camera_info topics now arrive at 60 Hz, but the RGB inputs are still not published at 60 Hz as explicitly specified; they arrive at around 40 Hz.

However, when I run this command to get the inputs from the camera directly, all three inputs (depth, RGB, and camera_info) arrive at 60 Hz:

ros2 launch realsense2_camera rs_launch.py depth_module.depth_profile:=640x480x60 rgb_camera.color_profile:=640x480x60
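For reference, both profile arguments in the command above use the same WIDTHxHEIGHTxFPS encoding. A minimal sketch that decomposes such a string (the helper name is my own, not part of the RealSense API):

```python
def parse_profile(profile: str) -> tuple[int, int, int]:
    """Split a RealSense profile string like '640x480x60'
    into (width, height, fps)."""
    width, height, fps = (int(part) for part in profile.split('x'))
    return width, height, fps

# The profile used in the launch command above:
print(parse_profile('640x480x60'))  # (640, 480, 60)
```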

This is the Jetson's resource consumption:

Hi @sarthakgarg0303

Can you please share the launch file needed to run this demo? Also, please double-check if you have modified the correct yaml file.

I noticed that your Jetson is running other software that may be consuming CPU and GPU. Please disable any other running applications.

Best,
Raffaello