Multiple environments (agents) using Stable-Baselines3

Hi,
I’m trying to extend the cartpole example (6. Custom RL Example using Stable Baselines — Omniverse Robotics documentation) to multiple cartpoles. I used a cloner, but the problem is that the observations now have shape (num_envs, obs_space), while Stable-Baselines3 expects observations of shape (obs_space,). A quick sketch of what I mean is below.
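For concreteness, here is a tiny illustration of the mismatch (the sizes are hypothetical, assuming 64 cloned cartpoles with a 4-dimensional observation):

```python
import numpy as np

num_envs, obs_dim = 64, 4                    # hypothetical sizes
batched_obs = np.zeros((num_envs, obs_dim))  # what the cloned scene produces
single_obs = np.zeros(obs_dim)               # what one cartpole produces

print(batched_obs.shape)  # (64, 4) -- returned per step after cloning
print(single_obs.shape)   # (4,)    -- what a plain (non-vectorized) SB3 env expects
```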

Is there something I’m missing? Can SB3 implementations be used with multiple environments/agents, or is this not possible at the moment?

Hi there, it is possible to use SB3 with multiple environments, but you will need to extend SB3’s VecEnv class so that the batched observations from your cloned environments are exposed through its vectorized-environment interface (a minimal sketch is below). Since SB3 does not necessarily provide the best performance with vectorized environments, we provide a vectorized implementation using the rl-games library instead, which can be found in this repo: GitHub - NVIDIA-Omniverse/OmniIsaacGymEnvs: Reinforcement Learning Environments for Omniverse Isaac Gym. You can also reference this thread for other options: Unable to train multi environment robot.
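If you do want to stay with SB3, a minimal sketch of such a wrapper might look like the following. Note that the `task` object and its batched `reset()`/`step()` methods are assumptions about your cloned cartpole setup, not an actual Isaac Sim API, and the exact set of abstract methods on VecEnv varies slightly between SB3 versions:

```python
import numpy as np
from stable_baselines3.common.vec_env import VecEnv


class IsaacVecEnv(VecEnv):
    """Exposes a batched simulation through SB3's VecEnv interface.

    `task` is a hypothetical handle to the cloned scene, assumed to provide:
      - task.num_envs
      - task.observation_space / task.action_space  (per-env gym spaces)
      - task.reset() -> obs of shape (num_envs, obs_dim)
      - task.step(actions) -> (obs, rewards, dones, infos), all batched
    """

    def __init__(self, task):
        self._task = task
        self._actions = None
        super().__init__(task.num_envs, task.observation_space, task.action_space)

    def reset(self):
        # SB3 expects the full batch here: shape (num_envs, obs_dim)
        return self._task.reset()

    def step_async(self, actions):
        # The simulator steps synchronously, so just stash the actions;
        # the base class calls step_async() then step_wait() from step().
        self._actions = actions

    def step_wait(self):
        obs, rewards, dones, infos = self._task.step(self._actions)
        # rewards/dones: shape (num_envs,); infos: a list of num_envs dicts
        return obs, np.asarray(rewards), np.asarray(dones), infos

    def close(self):
        self._task.close()

    # The remaining methods are required by the VecEnv ABC; minimal stubs
    # are enough for basic training, since there is only one underlying
    # simulation rather than num_envs separate Python objects.
    def get_attr(self, attr_name, indices=None):
        return [getattr(self._task, attr_name)] * self.num_envs

    def set_attr(self, attr_name, value, indices=None):
        setattr(self._task, attr_name, value)

    def env_method(self, method_name, *args, indices=None, **kwargs):
        return [getattr(self._task, method_name)(*args, **kwargs)] * self.num_envs

    def env_is_wrapped(self, wrapper_class, indices=None):
        return [False] * self.num_envs

    def seed(self, seed=None):
        return [seed] * self.num_envs
```

With that in place, the wrapped environment can be passed directly to an SB3 algorithm, e.g. `PPO("MlpPolicy", IsaacVecEnv(task))`; since it is already a VecEnv, SB3 will not try to re-wrap it.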
