In a current robotics project, we are using the ParticleFilterLocalization component from the navigation package. This gives the pose of the robot relative to the world frame in a known map and writes it to the pose tree. We can access this pose and use it successfully in our own path planning algorithm. My questions are:
- Is this pose tree based purely on the ParticleFilterLocalization component, or is there also a motion model in the background writing to the pose tree (i.e., a prediction of our pose)?
If not, we will add a motion model ourselves and write to the pose tree, but we need to know the answer first.
- Does there exist an uncertainty (covariance) matrix for the robot pose when using ParticleFilterLocalization, or anywhere else in the Isaac SDK, and if so, how can it be accessed? Or is the ParticleFilterLocalization output treated as absolute truth? That would make it difficult to integrate other sensors, and the codelets we write for them, since the lidar would always trump any of our other measurements.
I would like to read from and write to this matrix as more sensors are integrated into the system, so that we maintain both the best estimate of the pose and the certainty of that pose.
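To make the second question concrete, here is a minimal sketch of the kind of fusion we have in mind. This is generic NumPy, not the Isaac SDK API; the function names (`pose_covariance`, `fuse`) and the particle layout are our own assumptions. It estimates an empirical covariance from a weighted particle set (which a particle filter could in principle publish alongside the pose) and then fuses two Gaussian pose estimates by covariance weighting, so that no single sensor automatically trumps the others:

```python
import numpy as np

def pose_covariance(particles, weights):
    """Empirical mean and covariance of a 2D pose (x, y, theta) particle set.

    particles: (N, 3) array of [x, y, theta]; weights: (N,) summing to 1.
    The heading is averaged circularly to avoid wrap-around bias.
    """
    mean_xy = np.average(particles[:, :2], axis=0, weights=weights)
    mean_th = np.arctan2(np.average(np.sin(particles[:, 2]), weights=weights),
                         np.average(np.cos(particles[:, 2]), weights=weights))
    mean = np.array([mean_xy[0], mean_xy[1], mean_th])
    d = particles - mean
    d[:, 2] = np.arctan2(np.sin(d[:, 2]), np.cos(d[:, 2]))  # wrap angle residuals
    cov = (weights[:, None] * d).T @ d
    return mean, cov

def fuse(mean_a, cov_a, mean_b, cov_b):
    """Covariance-weighted fusion of two Gaussian pose estimates
    (a static information-filter / Kalman update)."""
    info_a, info_b = np.linalg.inv(cov_a), np.linalg.inv(cov_b)
    cov = np.linalg.inv(info_a + info_b)
    mean = cov @ (info_a @ mean_a + info_b @ mean_b)
    return mean, cov

# Example: a synthetic particle cloud around pose (1.0, 2.0, 0.1).
rng = np.random.default_rng(0)
particles = rng.normal([1.0, 2.0, 0.1], [0.05, 0.05, 0.02], size=(500, 3))
weights = np.full(500, 1.0 / 500)
pf_mean, pf_cov = pose_covariance(particles, weights)

# Fuse with a second (hypothetical) sensor's pose estimate.
fused_mean, fused_cov = fuse(pf_mean, pf_cov,
                             np.array([1.01, 2.0, 0.1]), np.diag([0.01, 0.01, 0.001]))
```

If the SDK exposed the particle set or a covariance like this, each new sensor codelet could contribute through the `fuse` step instead of being overridden by the lidar-based estimate.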