[visual_slam_node]: Unknown Tracker Error 2

Hi, I am trying to get cuVSLAM running on my Jetson Orin with the latest JetPack 6. I am operating in the dev container as outlined in the docs.

I have two cameras arranged as a stereo pair, and I am using the isaac_ros packages to change the color format (from mono8 to rgb8), rectify the images, and then feed them into the VSLAM node.

The visual SLAM package throws the error Unknown Tracker Error 2. Detailed output is attached.
output_slam.txt (19.3 KB)

I can’t really find anything online about this error. Please recommend a path forward.

Hi @SamsaaraLAM

Thank you for your post and the detailed message. I have reviewed your log and don’t see anything unusual, only:

[component_container_mt-1] [WARN] [1724305605.639673581] [visual_slam_node]: Unknown Tracker Error 2
[component_container_mt-1] [WARN] [1724305605.641408241] [visual_slam_node]: Unknown Tracker Error 2
[component_container_mt-1] [WARN] [1724305605.693952345] [visual_slam_node]: Unknown Tracker Error 2
[component_container_mt-1] [WARN] [1724305605.698713347] [visual_slam_node]: Unknown Tracker Error 2

I have forwarded your issue to the engineering team and will reply as soon as possible.

Raffaello

Hi Raffaello,

Thank you for the quick response! I enabled the debugging option in the launch script, and got another hint to the possible issue. Here is the output:
[component_container_mt-1] [ERROR] CUVSLAM_INVALID_ARG(2U) Condition failed: images[i].width == resolution[0]
[component_container_mt-1] [WARN] [1724364418.255534505] [visual_slam_node]: Unknown Tracker Error 2

I tried to look into the source code but couldn’t find anything substantial. I am feeding rectified images at 1920 × 1080 resolution to the VSLAM node, so I am not sure where the resolution mismatch comes from.
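As a side note, the failing condition in the log (images[i].width == resolution[0]) can be reproduced with a simple sanity check comparing each incoming image against its camera_info before the tracker sees it. This is a minimal sketch, not cuVSLAM's actual code; plain dicts stand in for the sensor_msgs/Image and CameraInfo fields:

```python
# Minimal sanity check mirroring the failing condition
# "images[i].width == resolution[0]" from the error log.
# Plain dicts stand in for sensor_msgs/Image and CameraInfo here.

def check_stereo_resolution(images, camera_infos):
    """Return a list of mismatch descriptions (empty if all match)."""
    problems = []
    for i, (img, info) in enumerate(zip(images, camera_infos)):
        if img["width"] != info["width"] or img["height"] != info["height"]:
            problems.append(
                f"camera {i}: image {img['width']}x{img['height']} "
                f"vs camera_info {info['width']}x{info['height']}"
            )
    return problems

left = {"width": 1920, "height": 1080}
right = {"width": 1920, "height": 1080}
infos = [{"width": 1920, "height": 1080}, {"width": 1280, "height": 720}]
print(check_stereo_resolution([left, right], infos))  # flags camera 1
```

Running a check like this on the exact topics the VSLAM node subscribes to (not the raw camera topics) would show whether the camera_info being delivered matches the rectified images.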

Thank you

Hi @SamsaaraLAM thank you for your reply, I have more details related to this bug:

This error indicates an invalid stereo pair, potentially originating from:

  • time synchronization between the two cameras
  • insufficient FOV overlap
  • wrong camera parameters

If possible, could you share a rosbag with the following topics: /tf /tf_static /raw_image_left /raw_image_right /raw_camera_info_left /raw_camera_info_right

The topic names may vary depending on how you named them; by “raw” I mean the output from the camera before the color-conversion and rectification nodes are applied.

Hi Raffaello,

Thank you for the guidance. I was able to resolve the issue by subscribing to the correct camera info topic. However, I am now encountering a new problem. I’m seeing the following warnings in the terminal:

[component_container_mt-1] [WARN] [1724647623.057695886] [visual_slam_node]: Delta between current and previous frame [116.908749 ms] is above threshold [50.000000 ms]

[component_container_mt-1] [WARNING] Delta between frame is 174 ms that is longer than desired 50 ms. Check camera fps and sync settings.

The frame delta has been observed to reach up to 250 ms. I know there’s a parameter to adjust this threshold, but when I check the frame rate with ros2 topic hz on the image topic that the VSLAM node subscribes to, it reports around 30 FPS outside the container and 25 FPS inside the container.

Is there any way to address this issue? I found a related discussion here.

The raw frame rate from the camera is 51 FPS.
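For what it's worth, the warning is about the gap between consecutive frame timestamps, not the average rate that ros2 topic hz reports, so a stream averaging 25-30 FPS can still trip the 50 ms threshold whenever a frame is dropped. A minimal sketch of that check, with synthetic timestamps standing in for header stamps:

```python
# Sketch: measure inter-frame deltas the way the warning does,
# from consecutive message timestamps (in seconds), and flag
# gaps above a threshold. Timestamps here are synthetic.

def frame_deltas_ms(stamps_s):
    """Deltas between consecutive timestamps, in milliseconds."""
    return [(b - a) * 1000.0 for a, b in zip(stamps_s, stamps_s[1:])]

def gaps_above(stamps_s, threshold_ms=50.0):
    """Deltas exceeding the threshold (the warning's 50 ms default)."""
    return [d for d in frame_deltas_ms(stamps_s) if d > threshold_ms]

# A nominal 30 FPS stream (~33 ms period) with one dropped frame:
stamps = [0.000, 0.033, 0.066, 0.183, 0.216]
print(gaps_above(stamps))  # the single ~117 ms gap after the drop
```

A single drop like this is invisible in an averaged rate but produces exactly the kind of delta the node is warning about.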

Thanks in advance!

Hi Raffaello,

I was able to get cuVSLAM to run, but it looks like the odometry produced is very poor. The base_link, when visualised in RViz2, drifts away after a few minutes and behaves erratically. I am using hardware-triggered FLIR Chameleon cameras as mentioned before. Here is a rosbag2 which contains the raw camera info, the raw images, and the tf topic.

I still get the jitter in frames warning.

When I tried to visualise the debug images that cuVSLAM stores in the /tmp/cuvslam folder, they seemed very strange. The images were cropped and stretched, but maintained the image resolution (1920 × 1080). I am providing rectified images, so I am not sure why the images were cropped and stretched.

Can you please guide me in the right direction? Any help will be extremely appreciated.

Kind regards

Hi @SamsaaraLAM I’m back with a reply to your previous post.

It’s not clear to me how it’s possible that the frequency of the image topic differs from the original frequency of the camera.

  1. Maybe the image_proc pipeline is not working fast enough; in that case, I recommend lowering the camera FPS, or abandoning this pipeline if it is used only for VSLAM, since cuVSLAM can work on raw colored images.
  2. You can also examine whether cuVSLAM is working fast enough with the provided input data by logging the topic visual_slam/status and analyzing the values of node_callback_execution_time (the time of the whole node callback, which may be affected by enabled visualization, path collection, etc.) and track_execution_time (the pure time of VO tracking). If the latter is consistently greater than the frame time difference for the corresponding camera FPS, then it is recommended to reduce the camera FPS.

It is not recommended to overload the input image queue, as this can result in random image drops by ROS. This leads to significant fluctuations in input frame timedeltas and visual odometry instability.
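The comparison described in point 2 can be sketched in a few lines: take the track_execution_time samples from visual_slam/status and check them against the frame period for the camera's FPS. The sample values below are hypothetical, just to illustrate the decision:

```python
# Sketch of the check described above: compare cuVSLAM's
# track_execution_time (seconds) against the frame period for
# a given camera FPS. The sample values are hypothetical.

def tracker_keeps_up(track_times_s, camera_fps):
    """True if every tracking step is faster than one frame period."""
    frame_period = 1.0 / camera_fps
    return all(t < frame_period for t in track_times_s)

# At 30 FPS the frame period is ~33.3 ms:
print(tracker_keeps_up([0.012, 0.015, 0.011], 30.0))  # tracking keeps up
print(tracker_keeps_up([0.040, 0.045, 0.038], 30.0))  # too slow; lower FPS
```

If the second case applies, lowering the camera FPS (as recommended above) keeps the input queue from overflowing and frames from being dropped by ROS.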

Hi @Raffaello ,

Thank you for the answer. The camera driver delivers BayerRG8 images at 55 FPS, but if it is set to output RGB8 (supported by Isaac), the FPS drops to 25; hence we had to use mono8 (51 FPS).

However, here are some issues that are being faced:

Rectification Node

If mono8 images are fed to the node, it complains about the pixel format and doesn’t rectify the images, so we need to convert mono8 to RGB8 before feeding it to the node.

So the rectification node outputs RGB rectified images.

VSLAM node

As shared before, when RGB rectified images were fed to the cuVSLAM node, the odometry was erratic and the node seemed to stop intermittently. Looking at the debug images, we found they were cropped and stretched, in mono8 pixel format. We realized that cuVSLAM reads the incoming image as mono8, hence the cropped and stretched images.

So converting the rectified RGB images back to mono8 before cuVSLAM seems to be the only way.
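For reference, the rgb8 → mono8 step itself is cheap: it is just a weighted sum of the three channels. In a real graph this would be an image format converter node (or cv_bridge/OpenCV), not hand-written NumPy; the sketch below only illustrates the conversion, using the standard ITU-R BT.601 luma weights:

```python
import numpy as np

# Sketch of the rgb8 -> mono8 conversion described above, using
# the ITU-R BT.601 luma weights (the same weights OpenCV's
# RGB2GRAY uses). Illustration only; in the actual pipeline this
# would be done by an image-format-conversion node.

def rgb8_to_mono8(rgb):
    """rgb: (H, W, 3) uint8 array -> (H, W) uint8 array."""
    weights = np.array([0.299, 0.587, 0.114])
    mono = rgb.astype(np.float32) @ weights
    return np.clip(np.round(mono), 0, 255).astype(np.uint8)

# A uniform gray image converts to the same gray level:
img = np.full((4, 4, 3), 128, dtype=np.uint8)
print(rgb8_to_mono8(img)[0, 0])  # 128
```

Since the cameras already deliver mono8 natively, a round trip through RGB8 and back loses nothing photometrically; it only costs the extra conversions in the pipeline.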

Please recommend a solution. Thank you in advance.