Get Torques in Position Control Mode

When using position drive mode for a DOF, I don’t have access to the final torques calculated and applied to the joints. But since the built-in PD controller generates torques as its final output to the simulation, I wonder whether there is a way to access these calculated torques to use as part of the reward. Does enable_actor_dof_force_sensors() on initialization, followed by acquire_dof_force_tensor() and refresh_dof_force_tensor(), work this way, or what is the recommended way of doing something like this?

I second this question.

Based on my tests, the force sensors return the total torque acting on a joint, meaning both gravity and actuator torques! Check this graph:

I’m trying to hold a single-link arm horizontally against gravity. The blue curve is the DOF position, which converges to pi/2, and the orange curve is the measured torque, which is almost zero!! So it is not the actuator’s torque but the total torque acting on that joint!


That’s a good way to test this, hadn’t thought of that – thanks!

I’m guessing that to get the actuator’s output we first have to compute the mass matrix. I need to run some tests before confirming whether it works.
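To make the idea concrete, here is a minimal sketch for a single revolute link (all parameters are hypothetical, not from my actual model): if the sensor reports the net joint torque, then at steady state it reads roughly zero, and the actuator torque has to be backed out from the equation of motion.

```python
import math

# Single revolute link in a vertical plane, rotating about a fixed pivot.
# Hypothetical parameters: mass m, distance l from pivot to center of mass,
# moment of inertia I about the pivot (e.g. a uniform 1 m rod).
m, l, I = 1.0, 0.5, 1.0 / 3.0
g = 9.81

def gravity_torque(theta):
    """Torque that gravity exerts about the pivot.

    theta = 0 means the link hangs straight down; theta = pi/2 is horizontal.
    """
    return -m * g * l * math.sin(theta)

def actuator_torque(theta, theta_ddot):
    """Back out the actuator torque from the net joint equation
    I * theta_ddot = tau_actuator + tau_gravity."""
    return I * theta_ddot - gravity_torque(theta)

# Holding the link horizontal at steady state: the net torque (what the
# sensor appears to report) is I * 0 = 0, but the actuator must supply
# roughly m * g * l to hold the pose.
net = I * 0.0
tau_act = actuator_torque(math.pi / 2, 0.0)
print(net)      # 0.0
print(tau_act)  # ≈ 4.905 = m * g * l
```

For a multi-DOF arm the same back-calculation needs the full mass matrix and gravity vector, which is why I want to test this before trusting it.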

@Abi.Ghafari Do you think you could share your code for this? Would love to do some tests with it by disabling or enabling force sensor property options.

It will take a few hours to prepare it, as my design includes some other stuff as well. If I find the time this week, I will share the code + URDF.

Ok, which force sensors did you use though? The handle/asset force sensors or the force sensor tensor API?

I’m not using the tensor API here:

After creating the actor:
ahandle = gym.create_actor(env, asset, pose, "actor", i, 1)

you just need to activate the force sensors:
gym.enable_actor_dof_force_sensors(envs[0], handles[0])

Then inside the while loop you just read the force sensors:
efforts = gym.get_actor_dof_forces(envs[0], handles[0])
This returns the measured forces at all the joints of the actor.

Have you tried using

gym.refresh_dof_force_tensor(sim)
gym.acquire_dof_force_tensor(sim)

instead? These don’t seem to give the total torque acting on a joint… But they still don’t seem to be entirely correct when I try various degrees of freedom, for example on a shoulder joint…

Ok, I just tried your version. I actually get the same results as with the tensor API… But at some arm rotations it gives me not 0 but something like -2125. When holding the arm horizontal I also get a non-zero result…

Not yet. What do you get exactly? The force sensor values are directly related to the DOF acceleration: if the joint is at steady state or moving at constant speed, the sensor value should be around zero.

Do you have friction in your joint? If so, the drive force has to compensate for the friction as well.

I also tried to use the mass matrix to calculate the force due to gravity, but it needs more research. Regarding the calculation of the actuator’s torque in speed and position mode, one way is to just back-calculate the output of the PD controller: effort = stiffness * (pos_target - pos) - damping * velocity.
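As a sketch of that back-calculation (my own illustration, not simulator code; the clamp to an effort limit is an assumption about how such drives typically saturate, and all numbers are hypothetical):

```python
def pd_drive_torque(stiffness, damping, pos_target, pos, vel,
                    effort_limit=float("inf")):
    """Reconstruct the torque a position-mode PD drive would apply:
    tau = kp * (pos_target - pos) - kd * vel, clamped to the effort limit.
    This mirrors the drive law, not a readback from the simulator."""
    tau = stiffness * (pos_target - pos) - damping * vel
    return max(-effort_limit, min(effort_limit, tau))

# Example: a joint lagging 0.1 rad behind its target while moving at 0.5 rad/s.
tau = pd_drive_torque(stiffness=100.0, damping=4.0,
                      pos_target=1.0, pos=0.9, vel=0.5)
print(tau)  # kp * err - kd * vel ≈ 8.0
```

The catch is that this only reproduces the drive torque if you know the exact stiffness/damping set in the DOF properties and sample positions/velocities at the same rate as the controller.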

So when the arm is held horizontally to the side, the force is -7. If it is in front, then -1. If I hold the arm up by rotating about the x-axis, the force is -2125; if I hold it up by rotating about the y-axis, it’s 0.0. I’m using joint_monkey.py with amp_humanoid.xml for debugging…

Could you post pictures of each of those states? That -2125 doesn’t look correct!

Hi @kstorey, do you have any idea why the force sensors sometimes don’t work as expected? Is our understanding of force sensors even correct? Thanks

I have to preface this by saying that I work on the underlying PhysX simulation, but I don’t have a massive amount of knowledge about Gym.

The underlying force sensors in PhysX have flags that can pick up on the forward dynamics forces (gravity, coriolis, centrifugal forces etc.) and the solver forces (drives, contacts, limits etc.). What these sensors report depends on the flags provided - if you request both forward dynamics and constraint forces and the arm is holding steady, the reported net force should be approximately zero (as you showed in one of your graphs). If you request just the forward dynamics forces or just the constraint forces in this case, you should get non-zero values that cancel each other out.
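To illustrate that flag behavior numerically, here is a toy example (my own illustration, not PhysX code; the breakdown of the constraint term and all numbers are hypothetical):

```python
# Toy model of the sensor flags described above, for a single link held
# horizontal at steady state. All quantities are hypothetical.
m, l, g = 1.0, 0.5, 9.81

# Forward-dynamics term: gravity torque on the horizontal link.
tau_forward = -m * g * l

# Constraint-solver terms, which the sensor aggregates into ONE value:
tau_drive = 4.5    # drive holding the pose (hypothetical split)
tau_limit = 0.405  # joint-limit force (hypothetical split)
tau_contact = 0.0  # no contacts in this pose
tau_constraint = tau_drive + tau_limit + tau_contact

# Both flags enabled: the reported net torque is ~0 at steady state.
net = tau_forward + tau_constraint
print(net)  # ≈ 0.0

# Constraint flag alone: non-zero, and NOT just the drive torque,
# because limit and contact contributions are folded in.
print(tau_constraint)  # ≈ 4.905, of which only 4.5 is the drive
```

This matches the earlier single-link graph: a sensor with both flags reads about zero even though the drive is doing real work.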

However, the forces reported by constraint force sensors may not be just the drive forces, because they will also potentially pick up joint limit forces, any contacts etc. and aggregate them into either a single joint force or a 6D wrench (the sensors can produce either). We plan on adding more detailed filtering of constraint forces in the near future.

I’m not sure why you are getting values that seem unusually high/low but if you have specific model and repro steps, we can investigate.


Hi @Abi.Ghafari and @kstorey ,

Regarding reproduction: I tried the joint_monkey.py demo in Isaac Gym to do some joint-force tests. Granted, I am not using the joint position controller there but the default effort mode. The environment moves one DOF of a limb at a time to its limits. When the arm is rotated to the maximum x-value so that it points up, I get the large force value described above. When I do the same in the y-direction, the arm ends up in the same position but the value is 0. This is very confusing. Here is my code (it is essentially the original joint_monkey.py, except that I use the AMP skeleton, look at one joint at a time instead of switching between DOFs, and print the forces acting on the DOFs):

"""
Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.

NVIDIA CORPORATION and its licensors retain all intellectual property
and proprietary rights in and to this software, related documentation
and any modifications thereto. Any use, reproduction, disclosure or
distribution of this software and related documentation without an express
license agreement from NVIDIA CORPORATION is strictly prohibited.

Joint Monkey
------------
- Animates degree-of-freedom ranges for a given asset.
- Demonstrates usage of DOF properties and states.
- Demonstrates line drawing utilities to visualize DOF frames (origin and axis).
"""

import math
import numpy as np
from isaacgym import gymapi, gymutil, gymtorch
import torch


def clamp(x, min_value, max_value):
    return max(min(x, max_value), min_value)

# simple asset descriptor for selecting from a list


class AssetDesc:
    def __init__(self, file_name, flip_visual_attachments=False):
        self.file_name = file_name
        self.flip_visual_attachments = flip_visual_attachments


asset_descriptors = [
    AssetDesc("mjcf/amp_humanoid.xml", False),
    AssetDesc("mjcf/nv_humanoid.xml", False),
    AssetDesc("mjcf/nv_ant.xml", False),
    AssetDesc("urdf/cartpole.urdf", False),
    AssetDesc("urdf/sektion_cabinet_model/urdf/sektion_cabinet.urdf", False),
    AssetDesc("urdf/franka_description/robots/franka_panda.urdf", True),
    AssetDesc("urdf/kinova_description/urdf/kinova.urdf", False),
    AssetDesc("urdf/anymal_b_simple_description/urdf/anymal.urdf", True),
]


# parse arguments
args = gymutil.parse_arguments(
    description="Joint monkey: Animate degree-of-freedom ranges",
    custom_parameters=[
        {"name": "--asset_id", "type": int, "default": 0, "help": "Asset id (0 - %d)" % (len(asset_descriptors) - 1)},
        {"name": "--speed_scale", "type": float, "default": 1.0, "help": "Animation speed scale"},
        {"name": "--show_axis", "action": "store_true", "help": "Visualize DOF axis"}])

if args.asset_id < 0 or args.asset_id >= len(asset_descriptors):
    print("*** Invalid asset_id specified.  Valid range is 0 to %d" % (len(asset_descriptors) - 1))
    quit()


# initialize gym
gym = gymapi.acquire_gym()

# configure sim
sim_params = gymapi.SimParams()
sim_params.dt = dt = 1.0 / 60.0
sim_params.gravity = gymapi.Vec3(0.0, -9.81, 0.0)

if args.physics_engine == gymapi.SIM_FLEX:
    pass
elif args.physics_engine == gymapi.SIM_PHYSX:
    sim_params.physx.solver_type = 1
    sim_params.physx.num_position_iterations = 6
    sim_params.physx.num_velocity_iterations = 0
    sim_params.physx.num_threads = args.num_threads
    sim_params.physx.use_gpu = args.use_gpu

sim_params.use_gpu_pipeline = False
if args.use_gpu_pipeline:
    print("WARNING: Forcing CPU pipeline.")

sim = gym.create_sim(args.compute_device_id, args.graphics_device_id, args.physics_engine, sim_params)
if sim is None:
    print("*** Failed to create sim")
    quit()

# add ground plane
plane_params = gymapi.PlaneParams()
gym.add_ground(sim, plane_params)

# create viewer
viewer = gym.create_viewer(sim, gymapi.CameraProperties())
if viewer is None:
    print("*** Failed to create viewer")
    quit()

# load asset
asset_root = "../../assets"
asset_file = asset_descriptors[args.asset_id].file_name

asset_options = gymapi.AssetOptions()
asset_options.fix_base_link = True
asset_options.flip_visual_attachments = asset_descriptors[args.asset_id].flip_visual_attachments
asset_options.use_mesh_materials = True

print("Loading asset '%s' from '%s'" % (asset_file, asset_root))
asset = gym.load_asset(sim, asset_root, asset_file, asset_options)

# get array of DOF names
dof_names = gym.get_asset_dof_names(asset)

# get array of DOF properties
dof_props = gym.get_asset_dof_properties(asset)

# create an array of DOF states that will be used to update the actors
num_dofs = gym.get_asset_dof_count(asset)
dof_states = np.zeros(num_dofs, dtype=gymapi.DofState.dtype)

# get list of DOF types
dof_types = [gym.get_asset_dof_type(asset, i) for i in range(num_dofs)]

# get the position slice of the DOF state array
dof_positions = dof_states['pos']

# get the limit-related slices of the DOF properties array
stiffnesses = dof_props['stiffness']
dampings = dof_props['damping']
armatures = dof_props['armature']
has_limits = dof_props['hasLimits']
lower_limits = dof_props['lower']
upper_limits = dof_props['upper']

# initialize default positions, limits, and speeds (make sure they are in reasonable ranges)
defaults = np.zeros(num_dofs)
speeds = np.zeros(num_dofs)
for i in range(num_dofs):
    if has_limits[i]:
        if dof_types[i] == gymapi.DOF_ROTATION:
            lower_limits[i] = clamp(lower_limits[i], -math.pi, math.pi)
            upper_limits[i] = clamp(upper_limits[i], -math.pi, math.pi)
        # make sure our default position is in range
        if lower_limits[i] > 0.0:
            defaults[i] = lower_limits[i]
        elif upper_limits[i] < 0.0:
            defaults[i] = upper_limits[i]
    else:
        # set reasonable animation limits for unlimited joints
        if dof_types[i] == gymapi.DOF_ROTATION:
            # unlimited revolute joint
            lower_limits[i] = -math.pi
            upper_limits[i] = math.pi
        elif dof_types[i] == gymapi.DOF_TRANSLATION:
            # unlimited prismatic joint
            lower_limits[i] = -1.0
            upper_limits[i] = 1.0
    # set DOF position to default
    dof_positions[i] = defaults[i]
    # set speed depending on DOF type and range of motion
    if dof_types[i] == gymapi.DOF_ROTATION:
        speeds[i] = args.speed_scale * clamp(2 * (upper_limits[i] - lower_limits[i]), 0.25 * math.pi, 3.0 * math.pi)
    else:
        speeds[i] = args.speed_scale * clamp(2 * (upper_limits[i] - lower_limits[i]), 0.1, 7.0)

# Print DOF properties
for i in range(num_dofs):
    print("DOF %d" % i)
    print("  Name:     '%s'" % dof_names[i])
    print("  Type:     %s" % gym.get_dof_type_string(dof_types[i]))
    print("  Stiffness:  %r" % stiffnesses[i])
    print("  Damping:  %r" % dampings[i])
    print("  Armature:  %r" % armatures[i])
    print("  Limited?  %r" % has_limits[i])
    if has_limits[i]:
        print("    Lower   %f" % lower_limits[i])
        print("    Upper   %f" % upper_limits[i])

# set up the env grid
num_envs = 36
num_per_row = 6
spacing = 2.5
env_lower = gymapi.Vec3(-spacing, 0.0, -spacing)
env_upper = gymapi.Vec3(spacing, spacing, spacing)

# position the camera
cam_pos = gymapi.Vec3(17.2, 2.0, 16)
cam_target = gymapi.Vec3(5, -2.5, 13)
gym.viewer_camera_look_at(viewer, None, cam_pos, cam_target)

# cache useful handles
envs = []
actor_handles = []

print("Creating %d environments" % num_envs)
for i in range(num_envs):
    # create env
    env = gym.create_env(sim, env_lower, env_upper, num_per_row)
    envs.append(env)

    # add actor
    pose = gymapi.Transform()
    pose.p = gymapi.Vec3(0.0, 1.32, 0.0)
    pose.r = gymapi.Quat(-0.707107, 0.0, 0.0, 0.707107)

    actor_handle = gym.create_actor(env, asset, pose, "actor", i, 1)
    actor_handles.append(actor_handle)

    # set default DOF positions
    gym.set_actor_dof_states(env, actor_handle, dof_states, gymapi.STATE_ALL)

    gym.enable_actor_dof_force_sensors(env, actor_handle)

# joint animation states
ANIM_SEEK_LOWER = 1
ANIM_SEEK_UPPER = 2
ANIM_SEEK_DEFAULT = 3
ANIM_FINISHED = 4

# initialize animation state
anim_state = ANIM_SEEK_LOWER
current_dof = 6
print("Animating DOF %d ('%s')" % (current_dof, dof_names[current_dof]))

while not gym.query_viewer_has_closed(viewer):

    # step the physics
    gym.simulate(sim)
    gym.fetch_results(sim, True)

    speed = speeds[current_dof]

    # animate the dofs
    if anim_state == ANIM_SEEK_LOWER:
        dof_positions[current_dof] -= speed * dt
        if dof_positions[current_dof] <= lower_limits[current_dof]:
            dof_positions[current_dof] = lower_limits[current_dof]
            anim_state = ANIM_SEEK_UPPER
    elif anim_state == ANIM_SEEK_UPPER:
        dof_positions[current_dof] += speed * dt
        if dof_positions[current_dof] >= upper_limits[current_dof]:
            dof_positions[current_dof] = upper_limits[current_dof]
            anim_state = ANIM_SEEK_DEFAULT
    if anim_state == ANIM_SEEK_DEFAULT:
        dof_positions[current_dof] -= speed * dt
        if dof_positions[current_dof] <= defaults[current_dof]:
            dof_positions[current_dof] = defaults[current_dof]
            anim_state = ANIM_FINISHED
    elif anim_state == ANIM_FINISHED:
        dof_positions[current_dof] = defaults[current_dof]
        current_dof = (current_dof + 1) % num_dofs
        anim_state = ANIM_SEEK_LOWER
        print("Animating DOF %d ('%s')" % (current_dof, dof_names[current_dof]))

    # debug override: pin the animated DOF at its lower limit every frame
    dof_positions[current_dof] = lower_limits[current_dof]

    if args.show_axis:
        gym.clear_lines(viewer)

    # clone actor state in all of the environments
    for i in range(num_envs):
        gym.set_actor_dof_states(envs[i], actor_handles[i], dof_states, gymapi.STATE_POS)

        if args.show_axis:
            # get the DOF frame (origin and axis)
            dof_handle = gym.get_actor_dof_handle(envs[i], actor_handles[i], current_dof)
            frame = gym.get_dof_frame(envs[i], dof_handle)

            # draw a line from DOF origin along the DOF axis
            p1 = frame.origin
            p2 = frame.origin + frame.axis * 0.7
            color = gymapi.Vec3(1.0, 0.0, 0.0)
            gymutil.draw_line(p1, p2, color, gym, viewer, envs[i])

    # update the viewer
    gym.step_graphics(sim)
    gym.draw_viewer(viewer, sim, True)

    # Wait for dt to elapse in real time.
    # This synchronizes the physics simulation with the rendering rate.
    gym.sync_frame_time(sim)

    gym.refresh_dof_force_tensor(sim)
    #state_tensor = gymtorch.wrap_tensor(gym.acquire_dof_force_tensor(sim)).view(num_envs, num_dofs)
    state_tensor = gym.get_actor_dof_forces(envs[0], actor_handles[0])
    print(current_dof)
    #print(torch.round(state_tensor[0, current_dof]))
    #print(torch.round(state_tensor[0, :]))
    print(np.round(state_tensor[current_dof]))
    print(np.round(state_tensor))

print("Done")

gym.destroy_viewer(viewer)
gym.destroy_sim(sim)

Thanks @kstorey, I understand, and I appreciate that you are answering our questions about Isaac Gym.
So if we had access to all of PhysX’s features from Isaac Gym, we could choose between constraint forces and forward-dynamics forces. That would be very helpful!
I’m wondering whether PhysX is going to be fully accessible through Gym in the future, or whether access will stay indirect as it is now. I’m asking because I need to know whether we should start learning PhysX along with Gym :)
Thanks!