Multiple environments (agents) using Stable-baselines3

Hi there, it is possible to use SB3 with multiple environments, but doing so requires extending the VecEnv class so that your environment works with SB3's vectorized environment interface. Since SB3 does not necessarily provide the best performance with vectorized environments, we provide a vectorized implementation using the rl-games library instead, which can be found in this repo: GitHub - NVIDIA-Omniverse/OmniIsaacGymEnvs: Reinforcement Learning Environments for Omniverse Isaac Gym. You can also reference this thread for other options: Unable to train multi environment robot.
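
To give a rough idea of what extending VecEnv involves, here is a minimal self-contained sketch of the interface SB3's vectorized environments expose (reset / step_async / step_wait, with auto-reset on episode end). Note this is an illustrative stand-in, not SB3's actual class, and `ToyEnv` is a hypothetical placeholder for your simulator:

```python
class ToyEnv:
    """Tiny stand-in env: state counts steps, episode ends at max_steps.
    Hypothetical placeholder for a real simulator (e.g. an Isaac Gym task)."""
    def __init__(self, max_steps=5):
        self.max_steps = max_steps
        self.t = 0

    def reset(self):
        self.t = 0
        return float(self.t)          # observation

    def step(self, action):
        self.t += 1
        obs = float(self.t)
        reward = float(action)        # toy reward: just echo the action
        done = self.t >= self.max_steps
        if done:
            obs = self.reset()        # auto-reset, mirroring VecEnv behavior
        return obs, reward, done, {}


class SimpleVecEnv:
    """Batches N envs behind a VecEnv-style async interface
    (step_async stores actions, step_wait returns batched results)."""
    def __init__(self, env_fns):
        self.envs = [fn() for fn in env_fns]
        self.num_envs = len(self.envs)
        self._actions = None

    def reset(self):
        return [env.reset() for env in self.envs]

    def step_async(self, actions):
        self._actions = actions

    def step_wait(self):
        results = [env.step(a) for env, a in zip(self.envs, self._actions)]
        obs, rewards, dones, infos = map(list, zip(*results))
        return obs, rewards, dones, infos

    def step(self, actions):
        self.step_async(actions)
        return self.step_wait()


# Usage: batch 4 toy envs and step them together
vec = SimpleVecEnv([lambda: ToyEnv() for _ in range(4)])
obs = vec.reset()
obs, rewards, dones, infos = vec.step([1.0] * vec.num_envs)
```

A real subclass of SB3's `stable_baselines3.common.vec_env.VecEnv` would additionally need to define the observation/action spaces and a few more abstract methods, but the batched step pattern is the core of it.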