RL_games SAC

Has anyone implemented the rl_games version of SAC in Isaac Gym? If yes, any code to share?

Hi @sesem738,

SAC is already supported; there are public examples: IsaacGymEnvs/rl_examples.md at main · NVIDIA-Omniverse/IsaacGymEnvs · GitHub

For example, here is one of the training configs: IsaacGymEnvs/AntSAC.yaml at main · NVIDIA-Omniverse/IsaacGymEnvs · GitHub

I want to use a custom network for the SAC model. How do I go about it?

You can check out network_builder.py under rl_games/algos_torch.
At the bottom is the SACBuilder class, which Isaac Gym references. You can use it to build your custom network.
You can specify conv2d/conv1d or mlp in your CustomEnvSAC.yaml file under cfg/train. Just follow the logic there.

For instance, for a conv2d custom network with two layers, you could write it like this in the YAML file:


conv_network:
  type: conv2d
  convs: [{filters: 5, kernel_size: 6, strides: 1, padding: 0}, {filters: 10, kernel_size: 6, strides: 1, padding: 0}]
  activation: relu
  normalization: batch_norm
  initializer:
    name: default
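
For reference, a spec like that ultimately maps to a stack of torch Conv2d layers. Below is a minimal, simplified sketch of how such a convs list could be turned into a torch.nn.Sequential; it illustrates the idea, not the exact rl_games builder code:

import torch.nn as nn

# Hypothetical helper showing how a YAML 'convs' spec could become torch layers.
convs = [
    {'filters': 5,  'kernel_size': 6, 'strides': 1, 'padding': 0},
    {'filters': 10, 'kernel_size': 6, 'strides': 1, 'padding': 0},
]

def build_conv2d_stack(in_channels, convs):
    layers = []
    for spec in convs:
        layers.append(nn.Conv2d(in_channels, spec['filters'],
                                kernel_size=spec['kernel_size'],
                                stride=spec['strides'],
                                padding=spec['padding']))
        layers.append(nn.BatchNorm2d(spec['filters']))  # normalization: batch_norm
        layers.append(nn.ReLU())                        # activation: relu
        in_channels = spec['filters']
    return nn.Sequential(*layers)

feature_extractor = build_conv2d_stack(in_channels=3, convs=convs)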


Obviously, you have to add something like self.feature_extractor = params['conv_network'] to the load method of the SACBuilder class.
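
As a sketch, the change inside your copy of the builder could look roughly like this ('conv_network' is the hypothetical key from the YAML above; check the real SACBuilder.load() in network_builder.py for the fields it already parses):

# inside your SACBuilder copy
def load(self, params):
    self.units = params['mlp']['units']            # existing mlp parsing stays as-is
    self.activation = params['mlp']['activation']
    # new: stash the custom conv section so the network build step can use it
    self.conv_params = params.get('conv_network', None)
    self.has_conv = self.conv_params is not None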

If you want to make things more customized, you can copy this SACBuilder class into your own Python file, make your changes (including renaming the class), and register the network via model_builder.register_network() in IsaacGym's train.py, as sketched below.
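
A sketch of the registration (the module and class names here are placeholders for your own copies; the YAML's network name must then match the registered name):

from rl_games.algos_torch import model_builder
from my_sac_networks import MySACBuilder  # hypothetical module holding your modified builder

# make the network available under a new name, then reference it in the YAML:
#   network:
#     name: my_sac_net
model_builder.register_network('my_sac_net', lambda **kwargs: MySACBuilder())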

At the moment, rl_games does not have multi-GPU support for the SAC agent. Only the PPO agent can be trained/inferenced with multi-GPU distributed workers using the default code.
However, you can make minimal changes to the SAC agent and give it multi-GPU support as well.
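
One common pattern for that (a sketch only, assuming torch.distributed has been initialized the way rl_games does for its PPO multi-GPU path; this is not existing rl_games code) is to broadcast the initial weights once and all-reduce gradients on every update:

import torch.distributed as dist

def broadcast_initial_params(model, src=0):
    # keep all workers starting from identical weights
    for p in model.parameters():
        dist.broadcast(p.data, src=src)

def all_reduce_grads(model, world_size):
    # average gradients across workers before each optimizer step
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= world_size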