RuntimeError: normal expects all elements of std >= 0.0

I modified the Kuka single-arm regrasping task, replacing the original robot with a 6-DoF arm and a 15-DoF five-fingered dexterous hand. In tasks/allegro_kuka/ I updated the URDF loading paths, the degree-of-freedom parameters, and the finger- and palm-related link parameters. Isaac Gym runs successfully, loads my arm and hand models, and training starts. However, after hundreds to thousands of epochs, the following error occurs at a random point during training. What might be the cause?

Traceback (most recent call last):
  File "", line 221, in <module>
  File "/home/chen/anaconda3/envs/rlgpu/lib/python3.7/site-packages/hydra/", line 52, in decorated_main
  File "/home/chen/anaconda3/envs/rlgpu/lib/python3.7/site-packages/hydra/_internal/", line 378, in _run_hydra
  File "/home/chen/anaconda3/envs/rlgpu/lib/python3.7/site-packages/hydra/_internal/", line 214, in run_and_report
    raise ex
  File "/home/chen/anaconda3/envs/rlgpu/lib/python3.7/site-packages/hydra/_internal/", line 211, in run_and_report
    return func()
  File "/home/chen/anaconda3/envs/rlgpu/lib/python3.7/site-packages/hydra/_internal/", line 381, in <lambda>
  File "/home/chen/anaconda3/envs/rlgpu/lib/python3.7/site-packages/hydra/_internal/", line 111, in run
    _ = ret.return_value
  File "/home/chen/anaconda3/envs/rlgpu/lib/python3.7/site-packages/hydra/core/", line 233, in return_value
    raise self._return_value
  File "/home/chen/anaconda3/envs/rlgpu/lib/python3.7/site-packages/hydra/core/", line 160, in run_job
    ret.return_value = task_function(task_cfg)
  File "", line 216, in launch_rlg_hydra
    'sigma': cfg.sigma if cfg.sigma != '' else None
  File "/home/chen/anaconda3/envs/rlgpu/lib/python3.7/site-packages/rl_games/", line 121, in run
  File "/home/chen/anaconda3/envs/rlgpu/lib/python3.7/site-packages/rl_games/", line 102, in run_train
  File "/home/chen/anaconda3/envs/rlgpu/lib/python3.7/site-packages/rl_games/common/", line 1226, in train
    step_time, play_time, update_time, sum_time, a_losses, c_losses, b_losses, entropies, kls, last_lr, lr_mul = self.train_epoch()
  File "/home/chen/anaconda3/envs/rlgpu/lib/python3.7/site-packages/rl_games/common/", line 1088, in train_epoch
    batch_dict = self.play_steps_rnn()
  File "/home/chen/anaconda3/envs/rlgpu/lib/python3.7/site-packages/rl_games/common/", line 732, in play_steps_rnn
    res_dict = self.get_action_values(self.obs)
  File "/home/chen/anaconda3/envs/rlgpu/lib/python3.7/site-packages/rl_games/common/", line 385, in get_action_values
    res_dict = self.model(input_dict)
  File "/home/chen/anaconda3/envs/rlgpu/lib/python3.7/site-packages/torch/nn/modules/", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/chen/anaconda3/envs/rlgpu/lib/python3.7/site-packages/rl_games/algos_torch/", line 263, in forward
    selected_action = distr.sample()
  File "/home/chen/anaconda3/envs/rlgpu/lib/python3.7/site-packages/torch/distributions/", line 65, in sample
    return torch.normal(self.loc.expand(shape), self.scale.expand(shape))
RuntimeError: normal expects all elements of std >= 0.0
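One detail worth knowing about this error: `torch.normal` checks `std >= 0.0`, and any comparison with NaN evaluates to false, so a NaN std triggers the same message as a literal negative value. In practice this error during RL training usually means NaN has crept into the policy's sigma. A minimal stdlib-only sketch (no torch, hypothetical helper name) of the distinction:

```python
import math

def invalid_std_indices(std_values):
    """Return indices whose std would fail torch.normal's `std >= 0.0` check.

    NaN fails the check as well, because every comparison with NaN is
    false -- so "std >= 0.0" failing often means NaN, not a negative number.
    """
    return [i for i, s in enumerate(std_values)
            if s < 0.0 or math.isnan(s)]

# A NaN produced upstream (e.g. from a NaN reward propagating through the
# policy update) is flagged exactly like a negative std:
print(invalid_std_indices([0.1, float("nan"), -0.5, 0.2]))  # -> [1, 2]
```

Logging such a check on the model's sigma each epoch can localize when the corruption first appears.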

The problem was that the fingertip names in the Python task file were inconsistent with the link names in the URDF, so the fingertip handles could not be found and the rewards could not be computed. The resulting invalid reward values eventually propagate NaN through the policy update into the action distribution's std, which is what surfaces as the `normal expects all elements of std >= 0.0` error.
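A name mismatch like this is easy to catch at startup instead of thousands of epochs later. A hedged sketch (hypothetical link names; assumes Isaac Gym's `find_actor_rigid_body_handle` returns an invalid handle of -1 for unknown names rather than raising) that validates the configured fingertip names directly against the URDF, using only the standard library:

```python
import xml.etree.ElementTree as ET

def missing_fingertips(urdf_xml, fingertip_names):
    """Return the fingertip names that have no matching <link> in the URDF.

    Useful as a startup sanity check: a lookup against an unknown body
    name tends to fail silently (e.g. an invalid -1 handle) and only
    shows up much later as corrupted rewards.
    """
    root = ET.fromstring(urdf_xml)
    link_names = {link.get("name") for link in root.iter("link")}
    return [name for name in fingertip_names if name not in link_names]

# Hypothetical example: the task file expects "thumb_tip", but the URDF
# only defines "thumb_link_3".
urdf = """<robot name="hand">
  <link name="palm"/>
  <link name="thumb_link_3"/>
</robot>"""
print(missing_fingertips(urdf, ["palm", "thumb_tip"]))  # -> ['thumb_tip']
```

Running this against the real URDF (`missing_fingertips(open(path).read(), fingertip_names)`) in the task's `__init__` and asserting the result is empty turns the silent mismatch into an immediate, readable failure.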