How to code reinforcement learning for mobile robots

I have previously used the USD files in Isaac Sim to assemble a USD mobile manipulator and conducted some simulation experiments with this USD model. Now I want to use reinforcement learning to fix the motion defects I observed in those experiments. I have looked at OIGE, Orbit, and the Custom RL Example using Stable Baselines, but I don't know which one to start with. I also don't know how to set up the robot parameters in Orbit or the yaml file for reinforcement learning in OIGE.

Hi there, OIGE is a set of reinforcement learning examples with a tasking framework built on top of Isaac Sim's RL framework. OIGE uses the RL library rl_games to perform optimized parallel training with a large number of environments. Orbit is a more research-focused, community-driven robotics framework that uses Isaac Sim; it provides a different way of doing RL, with its own separate RL and tasking framework. We leave it up to users to choose the framework that is most suitable for their use case.
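If you want a feel for how OIGE is driven, the examples are launched through a training script with Hydra-style command-line overrides. A typical invocation (based on the OIGE README; the task name and the `num_envs` override are illustrative choices) looks like:

```
PYTHON_PATH scripts/rlgames_train.py task=Cartpole headless=True num_envs=2048
```

The `num_envs` override is how you control the degree of parallelism mentioned above; the task's yaml file supplies the default.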

The Custom RL Example shows users how to use Isaac Sim to perform RL directly with Stable-Baselines3, without requiring an additional tasking framework such as OIGE or Orbit. However, it does not support vectorized parallel RL training.
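To make the single-environment workflow concrete, here is a minimal sketch of training with Stable-Baselines3 on one (non-vectorized) environment. `MobileRobotEnv` is a hypothetical stand-in for a gym wrapper around your Isaac Sim robot; the shipped Custom RL Example wires things up differently, so treat the class below purely as an illustration of the SB3 side:

```python
# Minimal single-env SB3 training sketch (not the shipped example).
import gymnasium as gym
import numpy as np
from gymnasium import spaces
from stable_baselines3 import PPO

class MobileRobotEnv(gym.Env):
    """Hypothetical placeholder; replace reset/step bodies with Isaac Sim calls."""

    def __init__(self):
        super().__init__()
        # Assumed shapes: 10-dim observation, 2-dim action (e.g. wheel velocities).
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(10,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        obs = np.zeros(10, dtype=np.float32)  # replace: read the simulator state
        return obs, {}

    def step(self, action):
        # Replace: apply the action, step the simulator, compute reward/termination.
        obs = np.zeros(10, dtype=np.float32)
        reward, terminated, truncated = 0.0, False, False
        return obs, reward, terminated, truncated, {}

env = MobileRobotEnv()
model = PPO("MlpPolicy", env, verbose=1)  # one env stepped at a time
model.learn(total_timesteps=10_000)
```

Because SB3 steps a single environment here, sample collection is much slower than the rl_games parallel setup in OIGE; that is the trade-off for the simpler integration.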

On the OIGE side, you can reference the yaml files of the existing example closest to your task and start from there. Try running with the existing parameters first, then iterate on the values based on the behaviours you observe. More details on the config file parameters can be found at 9.2. Creating a new RL Example in OmniIsaacGymEnvs — Omniverse IsaacSim latest documentation.
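As a rough illustration of what such a file contains, here is a trimmed sketch of a task config with field names modeled on the shipped Cartpole example; the task name and all values below are placeholders to adapt, not recommended settings:

```yaml
# Sketch of an OIGE task yaml (fields modeled on the Cartpole example;
# values are placeholders, not tuned settings).
name: MobileManipulator          # hypothetical task name
physics_engine: ${..physics_engine}

env:
  numEnvs: ${resolve_default:512,${...num_envs}}  # parallel envs, CLI-overridable
  envSpacing: 4.0                # spacing between cloned environments (m)
  clipObservations: 5.0
  clipActions: 1.0

sim:
  dt: 0.0083                     # physics step (~120 Hz)
  use_gpu_pipeline: ${eval:${...pipeline}=='gpu'}
  gravity: [0.0, 0.0, -9.81]
  add_ground_plane: True
```

Alongside this task yaml, each example also has a train yaml with the rl_games PPO hyperparameters (network size, learning rate, horizon length, etc.), which you can copy and tune in the same way.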