How to train multiple environments with RL in one scene at the same time?

Hello! I tried changing the value of 'self.num_envs' in the example '', but it didn't work.

Traceback (most recent call last):
  File "", line 26, in <module>
  File "/media/lyc/files/U_omniverse/pkg/isaac_sim-2022.1.0/kit/python/lib/python3.7/site-packages/stable_baselines3/ppo/", line 313, in learn
  File "/media/lyc/files/U_omniverse/pkg/isaac_sim-2022.1.0/kit/python/lib/python3.7/site-packages/stable_baselines3/common/", line 243, in learn
    total_timesteps, eval_env, callback, eval_freq, n_eval_episodes, eval_log_path, reset_num_timesteps, tb_log_name
  File "/media/lyc/files/U_omniverse/pkg/isaac_sim-2022.1.0/kit/python/lib/python3.7/site-packages/stable_baselines3/common/", line 429, in _setup_learn
    self._last_obs = self.env.reset()  # pytype: disable=annotation-type-mismatch
  File "/media/lyc/files/U_omniverse/pkg/isaac_sim-2022.1.0/kit/python/lib/python3.7/site-packages/stable_baselines3/common/vec_env/", line 61, in reset
    obs = self.envs[env_idx].reset()
  File "/media/lyc/files/U_omniverse/pkg/isaac_sim-2022.1.0/kit/python/lib/python3.7/site-packages/stable_baselines3/common/", line 79, in reset
    return self.env.reset(**kwargs)
  File "/media/lyc/files/U_omniverse/pkg/isaac_sim-2022.1.0/exts/omni.isaac.gym/omni/isaac/gym/vec_env/", line 130, in reset
  File "/media/lyc/files/U_omniverse/pkg/isaac_sim-2022.1.0/", line 83, in reset
    self._cartpoles.set_joint_positions(dof_pos, indices=indices)
  File "/media/lyc/files/U_omniverse/pkg/isaac_sim-2022.1.0/exts/omni.isaac.core/omni/isaac/core/articulations/", line 234, in set_joint_positions
    positions, device=self._device
IndexError: index 1 is out of bounds for dimension 0 with size 1
./ line 46: 19321 Segmentation fault      (core dumped) $python_exe "$@" $args
There was an error running python
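For context, the IndexError itself is just an indexing mismatch: the articulation view is backed by a single cartpole (size 1 along the environment dimension), while after raising num_envs the reset code asks for environment index 1 as well. A minimal, standalone sketch of the same failure (using NumPy for illustration; the names dof_pos and indices mirror the traceback but the arrays here are made up):

```python
import numpy as np

# The view holds only one articulation: shape (1, num_dofs).
dof_pos = np.zeros((1, 2))     # one cartpole, two DOFs
indices = np.array([0, 1])     # reset now asks for two environments

try:
    dof_pos[indices]           # only index 0 exists along axis 0
except IndexError as e:
    # NumPy phrases it as "axis 0"; torch's message says "dimension 0",
    # which is what appears in the traceback above.
    print(e)
```

So changing self.num_envs alone is not enough: the scene must actually contain one cartpole per environment before the reset indexes them.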

Are there any examples of training with multiple environments for Isaac Gym in Isaac Sim? I think the documentation for it is not detailed enough.
I used to use the standalone Isaac Gym package, but it can't import USD assets. I think the two have different APIs.

Hi @1324539608

Have you checked the 4. Running External Reinforcement Learning Examples — Omniverse Robotics documentation?

Hi @1324539608

Please take a look at 4. Running External Reinforcement Learning Examples — Omniverse Robotics documentation and GitHub - NVIDIA-Omniverse/OmniIsaacGymEnvs: Reinforcement Learning Environments for Omniverse Isaac Gym for documentation and examples of running multiple environments. The workflow is very similar to the Isaac Gym package.
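To illustrate the idea behind those examples: multi-environment training in one scene works by cloning the robot prim once per environment at a spatial offset, so every copy lives in the same stage and can be indexed as one batch. The helper below is only a sketch of that layout step (the name grid_offsets and the spacing value are illustrative, not an Isaac Sim API):

```python
import math

def grid_offsets(num_envs, spacing=4.0):
    """Compute an (x, y) translation for each environment clone so all
    copies fit in one scene on a square grid, spacing units apart."""
    per_row = int(math.ceil(math.sqrt(num_envs)))
    offsets = []
    for i in range(num_envs):
        row, col = divmod(i, per_row)
        offsets.append((col * spacing, row * spacing))
    return offsets

# Each clone would be placed at its offset before the articulation view
# is created, so the view's first dimension matches num_envs.
print(grid_offsets(4))  # [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]
```

The OmniIsaacGymEnvs examples do this cloning for you (the number of environments is set in the task configuration), which is why following those examples rather than only changing self.num_envs avoids the IndexError above.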