Hi, I have been trying to integrate my own reinforcement learning algorithm into the Ant task and test it. However, I found it very challenging, because your example on GitHub uses a half-finished package called rl_games. The package is poorly organized and has no comments or documentation, which makes it almost impossible to understand. I understand you chose the package because it runs really fast, but could you (the dev team) publish an easy-to-use example for AI developers who value the algorithm itself and correctness over efficiency? Not every user works for NVIDIA, after all.
My biggest question right now is: why do we need to register an RLGPUEnv (from rl_games) when the environment for my target task is defined elsewhere? Also, what does an observer do? And why does your AMPAgent reference "features" when I can't find "features" anywhere in your configurations?
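For context, the registration boilerplate I am referring to looks roughly like this (paraphrased from IsaacGymEnvs' training script; exact names and module paths may differ between versions, and `create_isaacgym_env` here is just a stand-in for whatever function actually builds the task):

```python
# Rough sketch of the rl_games registration step, as I understand it from
# IsaacGymEnvs. Names/paths are from memory and may differ by version.
from rl_games.common import env_configurations, vecenv
from rl_games.torch_runner import Runner
from isaacgymenvs.utils.rlgames_utils import RLGPUEnv, RLGPUAlgoObserver

# rl_games only knows how to create environments through its own registry,
# so the Isaac Gym task (defined elsewhere) must be wrapped in an RLGPUEnv
# and registered here before the runner can instantiate it.
vecenv.register(
    "RLGPU",
    lambda config_name, num_actors, **kwargs: RLGPUEnv(config_name, num_actors, **kwargs))
env_configurations.register(
    "rlgpu",
    {"vecenv_type": "RLGPU",
     # placeholder: stands in for the function that actually builds the task
     "env_creator": lambda **kwargs: create_isaacgym_env(**kwargs)})

# the "observer" is a callback object the Runner invokes during training,
# e.g. to collect and log episode rewards and info dicts
runner = Runner(RLGPUAlgoObserver())
```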
skrl is an open-source, modular library for reinforcement learning written in Python (using PyTorch) and designed with a focus on readability, simplicity, and transparency of algorithm implementation. In addition to supporting the OpenAI Gym / Farama Gymnasium, DeepMind, and other environment interfaces, it can load and configure NVIDIA Isaac Gym, NVIDIA Isaac Orbit, and NVIDIA Omniverse Isaac Gym environments. It also enables simultaneous training of agents by scopes (subsets of all available environments), which may or may not share resources, in the same run.
Visit its comprehensive documentation to get started :)
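Just to give a feel for the API, here is a minimal sketch of training PPO on the Isaac Gym Ant task, adapted from the patterns in skrl's documented examples (module paths and the model `compute` signature have shifted a bit between skrl versions, so treat this as illustrative rather than copy-paste):

```python
import isaacgym  # with Isaac Gym preview releases, import this before torch

import torch
import torch.nn as nn

from skrl.agents.torch.ppo import PPO, PPO_DEFAULT_CONFIG
from skrl.envs.loaders.torch import load_isaacgym_env_preview4
from skrl.envs.wrappers.torch import wrap_env
from skrl.memories.torch import RandomMemory
from skrl.models.torch import DeterministicMixin, GaussianMixin, Model
from skrl.trainers.torch import SequentialTrainer


# Gaussian policy network (mean head + learnable log-std)
class Policy(GaussianMixin, Model):
    def __init__(self, observation_space, action_space, device):
        Model.__init__(self, observation_space, action_space, device)
        GaussianMixin.__init__(self, clip_actions=False)
        self.net = nn.Sequential(nn.Linear(self.num_observations, 64), nn.ELU(),
                                 nn.Linear(64, 64), nn.ELU(),
                                 nn.Linear(64, self.num_actions))
        self.log_std_parameter = nn.Parameter(torch.zeros(self.num_actions))

    def compute(self, inputs, role):
        return self.net(inputs["states"]), self.log_std_parameter, {}


# deterministic state-value network
class Value(DeterministicMixin, Model):
    def __init__(self, observation_space, action_space, device):
        Model.__init__(self, observation_space, action_space, device)
        DeterministicMixin.__init__(self)
        self.net = nn.Sequential(nn.Linear(self.num_observations, 64), nn.ELU(),
                                 nn.Linear(64, 64), nn.ELU(),
                                 nn.Linear(64, 1))

    def compute(self, inputs, role):
        return self.net(inputs["states"]), {}


# load and wrap the Isaac Gym (preview 4) Ant task
env = wrap_env(load_isaacgym_env_preview4(task_name="Ant"))

# rollout memory sized to the number of parallel environments
memory = RandomMemory(memory_size=16, num_envs=env.num_envs, device=env.device)

models = {"policy": Policy(env.observation_space, env.action_space, env.device),
          "value": Value(env.observation_space, env.action_space, env.device)}

cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 16  # match the memory size

agent = PPO(models=models, memory=memory, cfg=cfg,
            observation_space=env.observation_space,
            action_space=env.action_space,
            device=env.device)

# the trainer drives the environment-agent interaction loop
trainer = SequentialTrainer(env=env, agents=agent,
                            cfg={"timesteps": 16000, "headless": True})
trainer.train()
```

The environment, memory, models, agent, and trainer are all separate objects you instantiate yourself, which is what makes it straightforward to swap in your own algorithm.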
Many thanks! I just had a quick glance at the code and documentation. skrl is a lot easier to read and fits my coding style perfectly, and supporting Isaac Orbit is a pleasant surprise. It shouldn't take too much work to convert my RL code to its style. Thanks again for your suggestion!