Hello, everyone. I skimmed the forums and didn’t find anybody asking about this problem before.
What I’m trying to do is the following: I have an actor that is a quadruped robot (Laikago) and a set of reference motions that I want it to reproduce. Each reference motion consists of a sequence of PD targets for every joint of the robot; for example, the robot has 12 joints, so a 24-frame reference motion has shape (24, 12). At each timestep I set the PD target of each joint to the value in the current frame.
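For concreteness, the per-timestep lookup described above can be sketched like this (NumPy, using the (24, 12) shape from the post; `ref_motion` and `frame_for_step` are my own illustrative names, and the zero-filled motion is just a stand-in for real data):

```python
import numpy as np

NUM_JOINTS = 12
NUM_FRAMES = 24

# Reference motion: one row of PD targets per frame, one column per joint.
ref_motion = np.zeros((NUM_FRAMES, NUM_JOINTS))

def frame_for_step(motion, step):
    """Return the PD targets for the given timestep, looping the clip."""
    return motion[step % len(motion)]

# At each simulation step, look up the current frame of targets.
targets = frame_for_step(ref_motion, 30)  # step 30 wraps around to frame 6
```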
Now, here comes the problem: there are several ways to set the PD targets, or even to set the state of the joints directly. This is what I have tried:

1. Not using the Tensor API – with just one actor, directly calling `set_actor_dof_position_targets`. This works fine.
2. Using the Tensor API on CPU to set PD targets – calling `set_dof_position_target_tensor` with `use_gpu_pipeline = False`. This works fine.
3. Using the Tensor API on GPU to set PD targets – calling `set_dof_position_target_tensor` with `use_gpu_pipeline = True`. This produces weird behavior.
4. Using the Tensor API on CPU to set the DOF states – `use_gpu_pipeline = False` and calling `set_dof_state_tensor`. This works fine.
5. Using the Tensor API on GPU to set the DOF states – `use_gpu_pipeline = True` and calling `set_dof_state_tensor`. This produces weird behavior.
The only differences between 2 and 3, and likewise between 4 and 5, are the value of `use_gpu_pipeline` and whether I call `.cuda()` on the tensors passed to `set_dof_position_target_tensor` / `set_dof_state_tensor`. For my application I’d rather use PD targets (as in 2 and 3), but I also checked the behavior of directly setting the DOF states and found that it produced the same strange behavior.
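To make the CPU/GPU switch explicit, here is a minimal sketch of the invariant I believe the two code paths must satisfy: the tensor handed to the simulator has to live on the same device the pipeline runs on. The actual Isaac Gym call is left as a comment, and `pipeline_device` / `apply_targets` are hypothetical helper names of mine, not part of the API:

```python
def pipeline_device(use_gpu_pipeline):
    """Device the simulation tensors are expected to live on,
    given the sim-params flag."""
    return "cuda:0" if use_gpu_pipeline else "cpu"

def apply_targets(targets_device, use_gpu_pipeline):
    """Guard that the targets tensor matches the pipeline's device
    before handing it to the simulator (placeholder call below)."""
    expected = pipeline_device(use_gpu_pipeline)
    if targets_device != expected:
        raise ValueError(
            f"targets on {targets_device}, but pipeline expects {expected}")
    # With a real tensor `targets` on the right device, this is where
    # gym.set_dof_position_target_tensor(sim, gymtorch.unwrap_tensor(targets))
    # would be called.
    return expected
```

So with `use_gpu_pipeline = True`, the targets tensor should have been moved with `.cuda()` first; a silent device mismatch is one possible source of the weirdness, though I haven’t confirmed that is what happens here.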
Please check the reference video to see the behavior (it’s a bit hard to describe in words, haha): https://www.youtube.com/watch?v=FSJDoSJwkSE&
I’d be very happy if somebody could shed some light on this matter. Thanks.