In my own RL task I use acquire_actor_root_state_tensor, acquire_net_contact_force_tensor, refresh_actor_root_state_tensor, and refresh_net_contact_force_tensor. During the first iteration everything works fine. However, once the robot collides with an object, the environment is reset. From that point on in training, calling refresh_actor_root_state_tensor still updates the root state tensor, but calling refresh_net_contact_force_tensor no longer updates the contact force tensor buffer.
I would sincerely appreciate your help!