cuVSLAM pose jump

When running isaac_ros_vslam for visual mapping, the SLAM pose and VIO pose jump significantly (see rviz2 for details). At that moment the terminal does not show "Visual tracking is lost", but it does report that some IMU messages were dropped. Is this an IMU error, or an error in the internal motion-model calculation?


The above is the rviz2 visualization. The blue path in the figure represents the VIO pose, and the green path represents the VSLAM pose.

The above is the terminal output, indicating that some IMU information was lost. How should this message be understood?

I face this issue too!

Hi,

Thank you for your post. I'm testing this bug and forwarding the issue to the engineering team.
I'll keep you posted.

Best,
Ahung

Hi,

When cuVSLAM is operating in VIO mode, you'd see a gravity vector (a large arrow pointing downward) in rviz with the default preset from the tutorial. This happens only once there are enough keyframes. The primary purpose of VIO is to sustain tracking temporarily (<10-20 frames) when visual tracking is lost, relying solely on IMU data. Due to noisy accelerometer readings, the VIO solution degrades rapidly without visual input.

What may ease the jump

  1. Change the RealSense profile to 30 FPS in the launch file:
  • 'depth_module.profile': '640x360x30'

  • At 90 fps there are only ~2 (noisy) IMU measurements between image frames; at 30 fps there are 6-7, so cuVSLAM gets more data between "blind" frames.

  2. Update the IMU noise parameters:
  • IMU noise parameters should be evaluated for each physical IMU device, so the parameters specified in the launch file may not suit your camera. It is better to estimate the parameters on your own camera.
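The 30 FPS recommendation in point 1 follows from simple rate arithmetic. A minimal sketch, assuming an IMU stream of roughly 200 Hz (a typical RealSense gyro rate; check your own device's profile, since the actual rate is configuration-dependent):

```python
# Estimate how many IMU samples arrive between consecutive image frames.
# ASSUMPTION: ~200 Hz IMU rate; substitute your camera's configured rate.
IMU_RATE_HZ = 200.0

def imu_samples_between_frames(camera_fps: float,
                               imu_rate_hz: float = IMU_RATE_HZ) -> float:
    """Average number of IMU measurements in one inter-frame interval."""
    return imu_rate_hz / camera_fps

print(imu_samples_between_frames(90))  # ~2.2 samples: little data to average out noise
print(imu_samples_between_frames(30))  # ~6.7 samples: more data between "blind" frames
```

At 200 Hz this reproduces the 2 vs. 6-7 measurements quoted above; fewer samples per interval means accelerometer noise is integrated with less averaging.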

Please make sure

  • Keep camera movements calm and smooth at the beginning (sideways, down) to collect enough diverse, feature-rich keyframes.
  • After ~10-20 s of movement, the gravity vector (a large gray arrow pointing down) should appear, meaning cuVSLAM is in fusion mode.
  • IMU fusion can preserve camera-pose tracking for only ~0.3-0.5 s (at 30 fps). Try repeating the experiment with a faster camera sweep, but without bumping the camera, since the IMU is very sensitive.
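The ~0.3-0.5 s figure above translates directly into a frame budget for occlusions. A minimal sketch (the bridge duration is the figure quoted above; the rest is arithmetic):

```python
# How many image frames can IMU-only fusion bridge before the pose degrades?
# The 0.3-0.5 s bridge duration is the figure quoted above for 30 fps operation.
def frames_bridged(camera_fps: float, bridge_seconds: float) -> int:
    """Number of image frames covered by IMU-only tracking of the given duration."""
    return round(camera_fps * bridge_seconds)

for t in (0.3, 0.5):
    print(f"{t} s at 30 fps -> ~{frames_bridged(30, t)} frames")  # ~9 and ~15 frames
```

So at 30 fps an occlusion longer than roughly 9-15 frames exceeds what IMU fusion can bridge, which is why longer occlusions need the multi-camera approach described below.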

The real solution for long-term camera-occlusion scenarios is multi-camera mode. This feature is currently available via the ROS API, and we plan to add an example with live RealSense cameras.

Best,
Ahung