PPO Implementation


Where can I find the PPO implementation used in Isaac Gym?


There are several PPO implementations on the following website.

Where in the repo do you see a PPO implementation? Could you point me to a specific file?

Isaac Gym uses the rl_games PPO; see one of its algorithms here:

Another PPO is implemented here: rsl_rl/rsl_rl/algorithms at master · leggedrobotics/rsl_rl · GitHub
This one is used in legged_gym, which is built on top of Isaac Gym.


Hi @noshaba,

As @erwin.coumans posted, we use rl-games: GitHub - Denys88/rl_games: RL implementations. We use it with all of our training environments in IsaacGymEnvs, as well as in the Isaac Gym paper: [2108.10470] Isaac Gym: High Performance GPU-Based Physics Simulation For Robot Learning. It is referenced in the default setup.py installation script. In addition to PPO, it has a high-performance, vectorized, zero-copy SAC implementation, and it supports multi-agent and self-play scenarios.
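For anyone just looking for the core idea behind all of these implementations: PPO optimizes a clipped surrogate objective. The sketch below is a minimal NumPy illustration of that loss, not the actual rl_games or rsl_rl code; the function name and defaults are my own.

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, clip_eps=0.2):
    """Minimal PPO clipped surrogate loss (to be minimized).

    ratio     -- pi_new(a|s) / pi_old(a|s), per sample
    advantage -- estimated advantage, per sample
    clip_eps  -- clipping range; 0.2 is a common default
    """
    unclipped = ratio * advantage
    # Clip the probability ratio to [1 - eps, 1 + eps] so a single
    # update cannot move the policy too far from the old one.
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    # Elementwise minimum of the two terms, negated: maximizing the
    # surrogate objective equals minimizing its negative.
    return -np.mean(np.minimum(unclipped, clipped))

ratio = np.array([1.5, 0.5, 1.0])
advantage = np.array([1.0, 1.0, -1.0])
loss = ppo_clip_loss(ratio, advantage)
```

In the real libraries this loss is combined with a value-function loss and an entropy bonus, and the ratio is computed from log-probabilities for numerical stability.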

