Isaac ROS VSLAM localization and IMU questions

Hello guys,

Here’s the setup I experimented with Isaac Visual SLAM on:

  • Xavier AGX (Jetpack 5.0.2)
  • Ubuntu 20.04
  • ROS Foxy
  • ISAAC Visual Slam DP 1.1 (installed from source)
  1. With localization_n_mapping set to true, I suddenly get an error that looks like: [Timestamp] Error addr X. What does this error mean, and what causes it? These errors are printed at random without affecting the flow of mapping/VO.

(Screenshot 1: the “Error addr X” console output)

  2. When the camera_link tf has sudden jumps (caused by incorrect localization, 2-3 meters too high) and I try to localize at a known point (using the visual_slam/load_map_and_localize service on a stored map), the console shows that the localization succeeded (and it prints the right coordinates). At the same time, nothing happens and an error is also printed: poseGraph->Reindex() failed. What exactly causes this error? Is there any fix for it?
    If Elbrus is reset first (using the visual_slam/reset service) and I then repeat the exact same steps, I get the expected behavior: the map loads and localization succeeds.

  3. If the camera is not perpendicular to the ground, what’s the correct way to set the gravitational_force vector at launch? Here’s how the parameters were set for the VIO tests (with the camera pointing slightly toward the ground):

                    'enable_imu': True,
                    'map_frame': 'map',
                    'odom_frame': 'odom',   
                    'base_frame': 'camera_link',
                    'input_left_camera_frame': 'camera_infra1_frame',
                    'input_right_camera_frame': 'camera_infra2_frame',
                    'input_imu_frame': 'camera_imu_optical_frame',
                    'path_max_size': 5000,
                    'gravitational_force': [0.8429, 6.0940, 8.5260]

And here’s what the gravitational vector looks like:

Is that the expected behavior for the gravity vector?

  4. Considering that the screenshot above illustrates the expected behavior, how can the gravitational vector be used to determine the real orientation of camera_link (if the camera points at the ground, the camera_link tf should do the same)?

Thank you and have a nice day!


Could you please share your launch file? It looks like there are additional nodes in the launch file which produce the “X” error.

To set the gravity vector, you need to fill in the vector that reports the orientation of the IMU in the base_link frame (x forward, y left, z up). See the code here.
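As a concrete sketch of that convention: for a camera pitched down about its y (left) axis, the level-frame gravity vector can simply be rotated into the tilted frame. The sign conventions below are an assumption for illustration and should be checked against the linked code:

```python
import math

G = 9.81  # standard gravity, m/s^2

def gravity_in_pitched_frame(pitch_rad):
    """Gravity expressed in a base frame (x forward, y left, z up)
    that has been pitched down by pitch_rad about its y axis.
    Sign conventions here are an assumption; verify against the
    reference implementation linked above."""
    # World-frame gravity is (0, 0, -G); pitching the frame about
    # y moves part of that vector onto the frame's x axis.
    return (G * math.sin(pitch_rad), 0.0, -G * math.cos(pitch_rad))

# Example: camera tilted 45 degrees toward the ground
g_cam = gravity_in_pitched_frame(math.radians(45.0))
```

Note that a pure rotation preserves the vector’s magnitude, so however the camera is tilted, the norm should stay at 9.81 m/s².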

Hi there,

Thank you for the response! Here’s the launch file relating to the first question above. The camera is not launched by the file below; I launch it separately.

import launch
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode

def generate_launch_description():
    """Launch file to bring up visual slam node standalone."""
    visual_slam_node = ComposableNode(
        name='visual_slam_node',
        package='isaac_ros_visual_slam',
        plugin='isaac_ros::visual_slam::VisualSlamNode',
        remappings=[('stereo_camera/left/image', '/camera/infra1/image_rect_raw'),
                    ('stereo_camera/left/camera_info', '/camera/infra1/camera_info'),
                    ('stereo_camera/right/image', '/camera/infra2/image_rect_raw'),
                    ('stereo_camera/right/camera_info', '/camera/infra2/camera_info'),
                    ('visual_slam/imu', '/camera/imu')],
        parameters=[{'enable_rectified_pose': True,
                     'denoise_input_images': False,
                     'rectified_images': True,
                     'enable_debug_mode': False,
                     'debug_dump_path': '/tmp/elbrus',
                     'enable_slam_visualization': True,
                     'enable_landmarks_view': True,
                     'enable_observations_view': True,
                     'enable_localization_n_mapping': True,
                     'enable_imu': True,
                     'map_frame': 'map',
                     'odom_frame': 'odom',
                     'base_frame': 'camera_link',
                     'input_left_camera_frame': 'camera_infra1_frame',
                     'input_right_camera_frame': 'camera_infra2_frame',
                     'input_imu_frame': 'camera_imu_optical_frame',
                     'path_max_size': 5000,
                     'gravitational_force': [0.8429, 6.0940, 8.5260]}])

    visual_slam_launch_container = ComposableNodeContainer(
        name='visual_slam_launch_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container',
        composable_node_descriptions=[visual_slam_node],
        output='screen')

    return launch.LaunchDescription([visual_slam_launch_container])

I was curious whether the picture above represents the expected results. Also, what about the **poseGraph->Reindex()** error? Do you have any idea what causes it?

Thank you once again!

How did you calculate the values that you’ve specified for the gravitational_force parameter? It seems that the norm of that vector is 10.5, which doesn’t match the expected value of g=9.81 m/s^2.
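For reference, the norm can be checked directly from the values posted above:

```python
import math

# gravitational_force values from the launch file above
g = [0.8429, 6.0940, 8.5260]
norm = math.sqrt(sum(v * v for v in g))
# norm comes out around 10.51 m/s^2, noticeably above 9.81
```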

You mentioned earlier that your camera is not aligned to the ground plane. However, in your RViz screenshot, it appears that the tf2 tree is broken and that the camera frame is aligned to the ground plane.

Could you provide some more information about how the frames in your transform tree are specified?

We’re still investigating the poseGraph->Reindex() error you described.

Hi there!
The values specified for the gravitational_force parameter are the ones provided by the camera’s IMU: I took the three linear_acceleration values from the camera’s published IMU topic. As you mentioned above, the norm of the vector is sometimes not exactly 9.81.
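If it helps, one way to tame that sensor noise is to average a handful of stationary IMU samples and rescale the mean to standard gravity before passing it in. The sample values below are made up for illustration:

```python
# Hypothetical stationary linear_acceleration samples from /camera/imu
samples = [
    (0.84, 6.10, 8.52),
    (0.85, 6.08, 8.54),
    (0.84, 6.11, 8.51),
]

# Component-wise mean of the samples
mean = [sum(axis) / len(samples) for axis in zip(*samples)]

# Rescale so the magnitude is exactly standard gravity
norm = sum(v * v for v in mean) ** 0.5
g_vec = [v / norm * 9.81 for v in mean]
```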

The camera in the screenshot above was positioned at about 45 degrees to the ground plane. When I launched the VSLAM node, I was expecting camera_link’s tf to be oriented at 45 degrees in pitch (similar to its real orientation). I was curious about the tf orientation because if the camera is not parallel to the ground, the mapping process won’t be done properly; the map comes out tilted up or down. The links in the tf tree are map -> odom -> camera_link -> the imu frame, the infra frames, and their optical frames.
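For what it’s worth, a static gravity reading does constrain roll and pitch (yaw is unobservable from gravity alone). Using the classic accelerometer tilt formulas on the vector posted earlier gives the sketch below; the axis mapping of camera_imu_optical_frame may differ from this assumed convention, so treat the formulas as something to verify rather than a drop-in answer:

```python
import math

# Gravity reading posted earlier in the thread
ax, ay, az = 0.8429, 6.0940, 8.5260

# Classic accelerometer tilt estimate (assumed axis convention)
roll = math.degrees(math.atan2(ay, az))
pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
```

With these values the tilt about one axis comes out in the mid-30s of degrees, in the neighborhood of (but not exactly) the reported 45-degree mounting, which is consistent with the noisy norm discussed above.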

Hello! Any updates on the problem described above? Also, if I want to localize at multiple points on the same map, is there a way to cache the map and then run localization repeatedly, instead of loading the map from disk for every localization attempt? Thank you in advance and have a nice day!

Which camera and IMU are you using by the way? We haven’t reproduced this issue yet ourselves.

I’m not sure I understood multiple localization on “multiple points on the same map”, but you could mmap the output file to a RAM disk to make loading that much faster if that’s an issue for you. Isaac ROS Visual SLAM loads its landmark pose graph, localizes itself in the map with a hint, and then continues to report its pose as you move, so you do not need to continuously call LoadMapAndLocalize() if that’s what you mean.

Thank you for your answer!

Indeed, my question is about localizing in an already mapped environment, but without knowing the position (the robot is somewhere in the warehouse, for sure). When doing this, the localization request will fail at most points, and the map is loaded from disk for each attempt. You mentioned mmap, but from what I’ve seen here, ELBRUS_LocalizeInExistDb takes a c_str as its argument. Could you share any resources/examples of how mmap would be used? This could greatly reduce the time to a successful localization, which would definitely be useful.
As for the camera and IMU, the experimental setup uses a Realsense D455 with its IMU, but adding an external IMU (bno055 or similar, for example) is definitely possible.

You could use a ramdisk (described here) and load using the filename from that location. This would be faster than loading from disk, but you still pay the deserialization overhead on each attempt.

We’ll look at adding a more efficient “multiple guesses” option that does not require reloading the map repeatedly. Thanks for letting us know!