I am currently working with the ‘differential_base_navigation’ subgraph example, with the ultimate aim of substituting the planner subgraph with a reinforcement-learning-based route planner.
To get started, I thought I would write a node that plans a simple route across an empty space (an empty room built in Unity for now). I can see that the existing planner outputs Plan2Proto messages and receives Goal2Proto and Plan2Proto messages, which is fine. However, I’m struggling to understand where the current robot pose comes from (in the map frame, I assume)? I’d love to see some examples that deal with the current robot pose.
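For context, here is a minimal, framework-agnostic sketch of the straight-line planning logic I have in mind for the empty room. The `Pose2` type and `plan_straight_line` function are my own placeholders, not Isaac SDK APIs; the open question is how I would obtain `start` (the current robot pose in the map frame) inside the subgraph:

```python
import math
from dataclasses import dataclass

@dataclass
class Pose2:
    """Minimal 2D pose: position in the map frame plus heading (radians)."""
    x: float
    y: float
    heading: float

def plan_straight_line(start: Pose2, goal: Pose2, step: float = 0.25) -> list:
    """Return evenly spaced waypoints from start to goal across empty space.

    Each intermediate waypoint's heading points along the direction of
    travel; the final waypoint adopts the goal's heading.
    """
    dx, dy = goal.x - start.x, goal.y - start.y
    distance = math.hypot(dx, dy)
    travel_heading = math.atan2(dy, dx)
    n_steps = max(1, math.ceil(distance / step))
    waypoints = [
        Pose2(start.x + dx * i / n_steps,
              start.y + dy * i / n_steps,
              travel_heading)
        for i in range(n_steps)
    ]
    waypoints.append(Pose2(goal.x, goal.y, goal.heading))
    return waypoints
```

The plan itself would presumably be serialized into a Plan2Proto message; it is the pose lookup feeding `start` that I cannot find documented.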
Is there a good example or tutorial with source code that works with a wheeled robot, rather than the camera-based examples in the SDK documentation?