SKRL: a modular reinforcement learning library with support for NVIDIA Omniverse Isaac Gym

Dear community,

skrl is an open-source modular library for reinforcement learning, written in Python (on top of PyTorch) and designed with a focus on readability, simplicity, and transparency of algorithm implementation. In addition to supporting the OpenAI Gym and DeepMind environment interfaces, it can load and configure NVIDIA Isaac Gym and NVIDIA Omniverse Isaac Gym environments, enabling the simultaneous training of several agents by scopes (subsets of all available environments), which may or may not share resources, in the same run.
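As a rough sketch of the scope mechanism (the trainer name comes from the skrl documentation, but treat the exact signature and numbers as assumptions, not a verbatim recipe):

```python
# Minimal sketch: train two agents simultaneously on one vectorized environment,
# split by scopes. `env`, `agent_a` and `agent_b` are assumed to have been
# created beforehand (e.g. a wrapped Isaac Gym environment with 1024 parallel
# instances and any two configured skrl agents).
from skrl.trainers.torch import SequentialTrainer

trainer = SequentialTrainer(cfg={"timesteps": 16000},
                            env=env,
                            agents=[agent_a, agent_b],
                            agents_scope=[512, 512])  # 512 environments per agent
trainer.train()
```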

Please visit the documentation for usage details and examples:

https://skrl.readthedocs.io/en/latest/


The current version, 0.6.0, is now available (the library is under active development; bug reports and/or fixes, feature requests, and everything else are more than welcome: open a new issue on GitHub!). Please refresh your browser (Ctrl + Shift + R or Ctrl + F5) if the documentation is not displayed correctly.

This new version focuses on supporting the training and evaluation of reinforcement learning algorithms in NVIDIA Omniverse Isaac Gym.

Added

  • Omniverse Isaac Gym environment loader
  • Wrap an Omniverse Isaac Gym environment
  • Save the best models during training
  • Omniverse Isaac Gym examples
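The new loader and wrapper can be combined in a couple of lines (a minimal sketch; the task name is only an example):

```python
# Minimal sketch: load an Omniverse Isaac Gym environment and wrap it
# so it exposes skrl's common environment interface.
from skrl.envs.torch import load_omniverse_isaacgym_env, wrap_env

env = load_omniverse_isaacgym_env(task_name="Cartpole")  # launches the simulation
env = wrap_env(env)
```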

Dear community,

skrl version 0.7.0 is now available (the library is under active development; bug reports and/or fixes, feature requests, and everything else are more than welcome: open a new issue on GitHub!). Please refresh your browser (Ctrl + Shift + R or Ctrl + F5) if the documentation is not displayed correctly.

Added

  • A2C agent
  • Isaac Gym (preview 4) environment loader
  • Wrap an Isaac Gym (preview 4) environment
  • Support for OpenAI Gym vectorized environments
  • Running standard scaler for input preprocessing
  • Installation from PyPI (pip install skrl)
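With the vectorized-environment support, a standard OpenAI Gym vector env can now be wrapped directly (a sketch; the environment id and number of envs are arbitrary):

```python
# Minimal sketch: wrap an OpenAI Gym vectorized environment for use with skrl.
import gym
from skrl.envs.torch import wrap_env

env = gym.vector.make("Pendulum-v1", num_envs=8)
env = wrap_env(env)  # exposes num_envs and the observation/action spaces to skrl
```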

Now, with the implementation of the running standard scaler (adapted from rl_games), better performance is achieved.

For example, for the Ant environment:

[figure: training curves; orange: PPO agent with input preprocessor, blue: PPO agent without input preprocessor]
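Enabling the preprocessor is a small change to the agent configuration (a sketch following the documented pattern):

```python
# Minimal sketch: enable the running standard scaler as a state preprocessor
# for PPO. `env` is a wrapped environment as in the previous sketch.
from skrl.resources.preprocessors.torch import RunningStandardScaler
from skrl.agents.torch.ppo import PPO_DEFAULT_CONFIG

cfg = PPO_DEFAULT_CONFIG.copy()
cfg["state_preprocessor"] = RunningStandardScaler
cfg["state_preprocessor_kwargs"] = {"size": env.observation_space,
                                    "device": env.device}
```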

Exciting work, thanks!
Can you add some algorithms and environments for multi-agent reinforcement learning?

Hi @Mr.Fox

Thank you for giving the library a try.

The idea is to add more algorithms to the library gradually :)

Regarding environments, development is focused on adding algorithms and functionality to work with the relevant environments and interfaces in the RL field (such as Omniverse Isaac Gym, Isaac Gym, OpenAI Gym, or DeepMind). Creating or providing environments as part of the library is not on my roadmap at the moment.

Dear community,

skrl version 0.8.0 is now available (the library is under active development; bug reports and/or fixes, feature requests, and everything else are more than welcome: open a new issue on GitHub!). Please refresh your browser (Ctrl + Shift + R or Ctrl + F5) if the documentation is not displayed correctly.

Added

  • AMP agent for physics-based character animation
  • Manual trainer
  • Gaussian model mixin
  • Support for creating shared models
  • Parameter role to model methods
  • Wrapper compatibility with the new OpenAI Gym environment API (by @JohannLange)
  • Internal library colored logger
  • Migrate checkpoints/models from other RL libraries to skrl models/agents
  • Configuration parameter store_separately to agent configuration dict
  • Save/load agent modules (models, optimizers, preprocessors)
  • Set random seed and configure deterministic behavior for reproducibility
  • Benchmark results for Isaac Gym and Omniverse Isaac Gym on the GitHub discussion page
  • Franka Emika real-world example
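A model written against the new mixin API looks roughly like this (a sketch of the 0.8.0-era signature, including the new role parameter; later versions may differ):

```python
import torch
import torch.nn as nn

from skrl.models.torch import Model, GaussianMixin

# Minimal sketch: a stochastic policy built from the base Model class
# plus the Gaussian mixin.
class Policy(GaussianMixin, Model):
    def __init__(self, observation_space, action_space, device, clip_actions=False):
        Model.__init__(self, observation_space, action_space, device)
        GaussianMixin.__init__(self, clip_actions)

        self.net = nn.Sequential(nn.Linear(self.num_observations, 64),
                                 nn.ELU(),
                                 nn.Linear(64, self.num_actions))
        self.log_std_parameter = nn.Parameter(torch.zeros(self.num_actions))

    def compute(self, states, taken_actions, role):
        # the `role` parameter is what makes shared models possible: one class
        # can mix in several behaviors and dispatch on the requested role
        return self.net(states), self.log_std_parameter
```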

Changed

  • Models implementation as Python mixin [breaking change]
  • Multivariate Gaussian model (GaussianModel until 0.7.0) to MultivariateGaussianMixin
  • Trainer’s cfg parameter position and default values
  • Show training/evaluation display progress using tqdm (by @JohannLange)
  • Update Isaac Gym and Omniverse Isaac Gym examples
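Putting a few of the 0.8.0 additions together (a sketch; `env` and `agent` are assumed to exist, and the checkpoint path is a placeholder):

```python
from skrl.utils import set_seed
from skrl.trainers.torch import SequentialTrainer

set_seed(42)  # seed the relevant RNGs; deterministic behavior is also configurable

# note the changed parameter order: the trainer now takes its cfg first
trainer = SequentialTrainer(cfg={"timesteps": 16000, "headless": True},
                            env=env, agents=agent)
trainer.train()

agent.save("./best_agent.pt")  # save the agent's modules (models, optimizers, ...)
agent.load("./best_agent.pt")  # restore them later, e.g. for evaluation
```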

Fixed

  • Missing recursive arguments during model weights initialization
  • Tensor dimension when computing preprocessor parallel variance
  • Models’ clip tensors dtype to float32

Removed

  • Parameter inference from model methods
  • Configuration parameter checkpoint_policy_only from agent configuration dict

As a showcase for the basic Franka Emika real-world example, simulated versions of the environment are provided for both Isaac Gym and Omniverse Isaac Gym to support advanced implementations :)


