Use different learning algorithms than PPO

Hey! Is it possible, or could you provide more information about, how to implement other learning algorithms such as SAC in Isaac Gym? I think it's straightforward to create your own environments, but I would also like to use different algorithms and/or custom architectures for solving those tasks. Thanks in advance!
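In the meantime, the general recipe for plugging in a custom algorithm is the same as with any gym-style task: collect transitions through the environment's reset/step interface, store them, and update from sampled batches. Below is a minimal, self-contained sketch of that off-policy (SAC-style) structure. Everything here (`DummyEnv`, `ReplayBuffer`, the random action) is a toy stand-in for illustration, not Isaac Gym's actual API.

```python
# Illustrative skeleton of an off-policy (SAC-style) training loop against a
# gym-style environment. All class and method names are hypothetical -- they
# mirror the common reset()/step() interface but are NOT Isaac Gym's API.
import random
from collections import deque

class DummyEnv:
    """Toy stand-in for a task: 1-D state, rewarded for staying near 0."""
    def reset(self):
        self.state = random.uniform(-1.0, 1.0)
        return self.state

    def step(self, action):
        self.state += action
        reward = -abs(self.state)            # closer to 0 is better
        done = abs(self.state) > 2.0
        return self.state, reward, done

class ReplayBuffer:
    """Off-policy algorithms like SAC learn from stored transitions."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

def train(num_steps=200, batch_size=32):
    env, buffer = DummyEnv(), ReplayBuffer()
    state = env.reset()
    for _ in range(num_steps):
        action = random.uniform(-0.1, 0.1)   # placeholder for policy(state)
        next_state, reward, done = env.step(action)
        buffer.add((state, action, reward, next_state, done))
        if len(buffer) >= batch_size:
            batch = buffer.sample(batch_size)  # placeholder for a gradient update
        state = env.reset() if done else next_state
    return buffer

buffer = train()
```

Swapping the random action for a learned policy and the sampled batch for an actual critic/actor update is where a real SAC implementation (e.g. in PyTorch) would go; the surrounding loop stays the same.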


Hi!

I found someone trying something similar on GitHub, including SAC, imitation learning, etc.

That project does not appear to use the latest version of Isaac Gym, as it requires a GPU driver version below the current minimum requirements.

But maybe you can still learn from it how to add custom algorithms.

Nice. Universal Robots!

But debugging seems to be difficult, so I'll wait for SAC to be added.

Thanks! I think that will help!

Hi

I would like to share the RL library we are using/developing in our lab.

It includes PPO, SAC, DDPG, and TD3 (more are coming) and supports both Isaac Gym (preview 2 and preview 3) and OpenAI Gym environments…

In addition, its documentation is being worked on; a first version is already available:

https://skrl.readthedocs.io/en/latest/index.html

The library is under continuous development. In the coming weeks we expect to add implementations of missing features, fix bugs, and provide more detailed documentation with examples.

Any feedback, bug reports, feature requests, etc. are more than welcome 😁

Here is an example for DDPG: ddpg_skrl_isaacgym.py (3.4 KB)

  • Isaac Gym (preview 3)

    python ddpg_skrl_isaacgym.py task=TASK_NAME
    
  • Isaac Gym (preview 2)

    python ddpg_skrl_isaacgym.py --task TASK_NAME
    

rl-games already supports SAC: GitHub - Denys88/rl_games: RL implementations. I successfully trained the Ant and Humanoid environments with it internally. We might add a few examples in one of the next releases.
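For reference, rl-games selects the algorithm through its YAML config rather than code changes. The fragment below is a hedged sketch of what a SAC config header might look like; the key names and values here are recalled from rl-games example configs and may differ between versions, so verify them against the configs shipped with your rl_games checkout.

```yaml
# Hypothetical rl-games config fragment -- check the example configs in your
# rl_games version before relying on these exact key names.
params:
  algo:
    name: sac              # selects the SAC implementation
  model:
    name: soft_actor_critic
  network:
    name: soft_actor_critic
  config:
    name: Ant_SAC
    num_actors: 64         # parallel environments
    gamma: 0.99
    batch_size: 256
```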