How do Sight's windows know how to translate the local map onto the global map in the flatsim example application?

My team and I are developing an application based on ~/isaac/sdk/packages/flatsim/apps:flatsim and we need to extract the local map generated by the Navigation Stack.
We have noticed that the local map generated by the “evidence_grid_map.subgraph.json” subgraph, in particular from the source “evidence_local_map/isaac.egm_fusion.EvidenceMapFusion/evidential_local_map”, is anchored to the robot frame.
We would like to express it in the reference frame of the global map. To rotate the local map, we have used the robot's odometry, read from the “differential_base_navigation.subgraph.json” subgraph, from the source “imu_odometry.subgraph/interface/odometry”.
We would also like to translate the local map onto the global map the way Sight's windows do, but we haven't managed to do it yet.

Our problem and questions:

  • When we read the robot's odometry, as we do for the rotation, the translation along the x and y axes is expressed relative to the robot's starting position, so it is not usable for translating the local map onto the global map.
  • How can we obtain the position of the robot with reference to the global map?
  • Is it a good idea to retrieve the robot's starting position and add the odometry to it? If yes, how can the starting position be retrieved in a way that does not depend on the underlying simulator?

Answer:
  • When we read the robot's odometry, as we do for the rotation, the translation along the x and y axes is expressed relative to the robot's starting position, so it is not usable for translating the local map onto the global map.
    Yes: odometry only tracks the current pose of the robot relative to the odom frame, which is usually the pose of the robot when it started (odom_pose_robot). You would use the GridSearchLocalizer (or another global localizer) to find map_pose_odom, which lets you solve for map_pose_robot.
  • How can we obtain the position of the robot with reference to the global map?
    You can query the pose tree for the map_pose_robot transform directly, assuming the global localizer and your particle filter localizer are maintaining those transforms.
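
To make the frame chain concrete, here is a minimal sketch in plain NumPy (not the Isaac SDK API; all numeric values are made up) of composing map_T_odom from the global localizer with odom_T_robot from odometry to recover map_T_robot:

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 3x3 matrix for a 2D pose (translation x, y; rotation theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1.0]])

# Hypothetical values: the global localizer maintains map_T_odom,
# odometry gives odom_T_robot (robot pose relative to its start).
map_T_odom = se2(10.0, 5.0, np.pi / 2)
odom_T_robot = se2(2.0, 0.0, 0.0)

# Chain the transforms to get the robot pose in the global map frame.
map_T_robot = map_T_odom @ odom_T_robot

x, y = map_T_robot[0, 2], map_T_robot[1, 2]
theta = np.arctan2(map_T_robot[1, 0], map_T_robot[0, 0])
print(x, y, theta)  # robot position and heading in the map frame
```

In the Isaac SDK you would not compose these by hand: the pose tree does the chaining for you when you query map_pose_robot, as long as both edges (map→odom from the localizer, odom→robot from odometry) are being written.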

  • Is it a good idea to retrieve the robot's starting position and add the odometry to it? If yes, how can the starting position be retrieved in a way that does not depend on the underlying simulator?
    Generally, no. Odometry drifts over time for any real robot, and the error accumulates. The particle filter localizer uses scan matching to continually re-anchor the robot pose to the lidar returns it sees, using odometry as a prior. This can cause localization “jumps”, but it gives you an accurate pose.
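
Once map_T_robot is available, translating the local map onto the global map (what Sight's windows effectively do) amounts to pushing each local-map cell through that transform. A hedged sketch, assuming a hypothetical cell_size and grid origin offset robot_T_grid (neither is taken from the actual flatsim configuration):

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 3x3 matrix for a 2D pose."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1.0]])

# Illustrative assumptions: the local map is a grid anchored at the robot,
# with cell_size metres per cell and its origin offset by robot_T_grid
# relative to the robot frame (e.g. a 6x6 m map centred on the robot).
cell_size = 0.05
robot_T_grid = se2(-3.0, -3.0, 0.0)
map_T_robot = se2(10.0, 7.0, np.pi / 2)  # queried from the pose tree

def cell_to_map(row, col):
    """Map a local-map cell index to metric coordinates in the global map frame."""
    p_grid = np.array([col * cell_size, row * cell_size, 1.0])
    p_map = map_T_robot @ robot_T_grid @ p_grid
    return p_map[:2]

print(cell_to_map(0, 0))  # grid origin expressed in map coordinates
```

With this in place, the rotation and the translation both come from the same map_T_robot transform, so you never need the robot's starting position explicitly.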