Inaccuracy of DOF state readings

Hello, there.

Intent: I would like to use feedback from measured angular velocity to regulate torque actuation.

Problem: I’ve noticed that DOF state readings are inconsistent and also somewhat affected by simulation parameters:

  • When the joint/body is idle, its (angular) velocities are reported with offsets large and consistent enough (e.g. 0.1 rad/s) that they should correspond to noticeable movement, yet the joint/body remains idle, as expected.
  • When the joint/body is moving at what appears to be a consistent rate, the reported (angular) velocity fluctuates noisily over a range of e.g. 4 to 12 rad/s. Skidding might play a part, but the visual result doesn’t align with this (the motion looks consistent, and my guess would put it closer to 4 rad/s for this case). Higher velocities seem to give more stable readings.

Question: Should the DOF state tensor be this inaccurate, while the simulation runs fine? How could I get more reliable (angular) velocity readings?

Thank you for your time and responses.

Some noise is expected even when the bodies are resting. For example, when gravity is acting on the bodies, the solver needs to apply impulses to prevent interpenetration or enforce joint limit constraints. This could account for some of the high frequency, low amplitude noise we see in your plots. An exponentially weighted moving average could help smooth it out.
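To illustrate, here is a minimal sketch of an exponentially weighted moving average filter applied to a noisy velocity signal. The smoothing factor `alpha` and the example signal are assumptions for demonstration; a higher `alpha` tracks the raw readings more closely, while a lower `alpha` smooths more at the cost of lag.

```python
import numpy as np

def ewma_filter(alpha):
    """Return a stateful step function computing an exponentially
    weighted moving average: y_t = alpha * x_t + (1 - alpha) * y_{t-1}."""
    state = {"y": None}

    def step(x):
        x = np.asarray(x, dtype=float)
        if state["y"] is None:
            state["y"] = x.copy()  # initialize with the first sample
        else:
            state["y"] = alpha * x + (1.0 - alpha) * state["y"]
        return state["y"]

    return step

# Example: smooth a noisy reading that should be a constant 5 rad/s
rng = np.random.default_rng(0)
noisy = 5.0 + 0.5 * rng.standard_normal(200)
smooth = ewma_filter(alpha=0.2)
filtered = [smooth(v) for v in noisy]
```

The same `step` function works elementwise on a whole DOF velocity vector per simulation step, so one filter instance can smooth all joints at once.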

I’m assuming that you’re using PhysX with the tensor API? Could you please try with different devices (CPU and GPU) and compare the results? You could try the following:

CPU pipeline with CPU simulation:

    sim_params.use_gpu_pipeline = False
    sim_params.physx.use_gpu = False

CPU pipeline with GPU simulation:

    sim_params.use_gpu_pipeline = False
    sim_params.physx.use_gpu = True

GPU pipeline with GPU simulation:

    sim_params.use_gpu_pipeline = True
    sim_params.physx.use_gpu = True

If you could provide a bit more detail about the simulation, we may be able to suggest more targeted solutions.


Another thing to try is finite differencing the positions. That should be even easier/faster than EWMA and won’t introduce lag.
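A minimal sketch of the finite-differencing idea, assuming you cache the previous step's DOF positions and know the simulation step `dt`. The `wrap` option is an assumption for revolute joints whose reported positions wrap around at ±π; without it, a wrap-around would show up as a huge spurious velocity spike.

```python
import numpy as np

def finite_diff_velocity(pos, prev_pos, dt, wrap=False):
    """Estimate DOF velocity by differencing positions across one step.

    wrap=True maps the position delta into [-pi, pi), which handles
    revolute joints whose positions wrap around at +/- pi (an assumption
    about how the positions are reported)."""
    delta = np.asarray(pos, dtype=float) - np.asarray(prev_pos, dtype=float)
    if wrap:
        delta = (delta + np.pi) % (2.0 * np.pi) - np.pi
    return delta / dt

dt = 1.0 / 60.0  # simulation step; match this to your sim_params.dt
# Second DOF crosses the +/- pi boundary between steps:
vel = finite_diff_velocity([0.10, -3.10], [0.00, 3.10], dt, wrap=True)
```

Here the first DOF gives 6.0 rad/s, and the second gives a small positive velocity rather than the spurious ~-372 rad/s a naive difference would report.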

The velocities reported by the solver might be noisy due to internal substepping in the physics engine, which is something that we can try to rectify in the future. But in the meantime, please try the finite differencing or EWMA if the velocities reported by the solver are too noisy.

But please let us know if you find a noticeable difference between CPU and GPU results, if you have a chance to try it!

I understand that some noise is unavoidable, but the magnitude here was quite surprising, especially since the simulation otherwise appears fine, and it is likely problematic for sim-to-real transfer. Real-life sensors have noise too, of course, but I would prefer to control that noise in the simulation myself.

I tested the CPU/GPU combinations, as suggested. Some details before the comparison:

  • the DOFs here refer to wheels on a 4-wheeled robot (with each wheel independently actuated),
  • there are multiple robots in the simulation, of which one was controlled by me (4 DOFs moving back and forth at two different torque magnitudes) and the rest were idle (4x7 idle DOFs),
  • for now, the terrain is a simple plane, but more complex, sloped terrain might be tested in the future,
  • PhysX backend is used for simulation and the tensor API is used for reading the DOF state tensor and setting the DOF actuation tensor.

In the images from the original post, the simulation and pipeline were both on CPU. Here is the comparison:


  • the idle plots are cut off at the beginning due to an initial spike which ruined the scale of the plot (values reached about 1.5 rad/s as the wheels settled, though the visual effect did not look that extreme),
  • the magnitude of the noise while idle seems a bit smaller with CPU simulation,
  • the noise patterns while idle seem identical on repeated runs with GPU simulation.

I wanted to avoid introducing latency with EWMA, but differencing the positions turned out great! Here is the comparison:


  • noise is greatly reduced,
  • no offsets are reported when idle,
  • noise patterns on repeated GPU runs remain consistent.

All in all, this aligns much better with the visual results and is sufficient for my use case. Thank you for providing the solution!
