Does Isaac Gym support a real2sim-style (NOT sim2real) API?

Hi. I’m trying to set up some hardware for RL on Isaac Gym.

I would like to connect a SenseGlove and an HTC Tracker 3.0 to Isaac Gym, and my goal is to reflect my real finger movements and wrist position in the simulation. What method should be used to implement this? Do the sim2real tools cover this, or is there any real2sim-like API?

If Isaac Gym does not currently support this, what are the alternative options (e.g. ROS…? though it seems that Isaac Gym does not support ROS yet…)?

Thanks.

It should be doable. First you need to make the URDF or MJCF file for the glove, and that’s the most difficult part of the work!! URDFs are crazy complicated, so if you already have one you are lucky! You can use the Allegro Hand example as a starting point.

The rest is pretty straightforward, unless you need to model the exact forces and torques in the human hand, which I guess you don’t.
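If it helps, here is a minimal sketch of loading such a hand URDF and spawning it as an actor, following the standard Isaac Gym preview API (the asset root and file name are placeholders; the Allegro Hand example does essentially the same thing):

from isaacgym import gymapi

gym = gymapi.acquire_gym()

# Create the simulation with the PhysX backend (sim_params are discussed further down).
sim_params = gymapi.SimParams()
sim = gym.create_sim(0, 0, gymapi.SIM_PHYSX, sim_params)

# Load the hand/glove URDF -- asset root and file name are placeholders.
asset_options = gymapi.AssetOptions()
asset_options.fix_base_link = True  # the wrist pose will be driven externally
hand_asset = gym.load_asset(sim, "assets", "glove_hand.urdf", asset_options)

# Create one environment and spawn the hand actor in it.
env = gym.create_env(sim, gymapi.Vec3(-1, -1, 0), gymapi.Vec3(1, 1, 1), 1)
pose = gymapi.Transform()
pose.p = gymapi.Vec3(0.0, 0.0, 0.5)
actor_handle = gym.create_actor(env, hand_asset, pose, "hand", 0, 1)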


Thank you for your answer.
But assuming that I have the URDF file, how exactly can I make Isaac Gym follow my movements? I would appreciate it if you could provide examples or detailed documentation on the relevant methods.

What’s the output of the glove? USB? Serial? Does it have any Python API?


I’m sorry for replying late.

The SenseGlove connects to the computer over USB.

$ lsusb (or $ sudo usbview)
Microchip Technology, Inc. (formerly SMSC) USB 2.0 Hub

Also, SenseGlove provides a ROS workspace, which supports Python 3 & C++:
https://github.com/Adjuvo/senseglove_ros_ws

Thank you.

Actually it’s very simple,
First, you need to read the glove’s output data in your code and put it into a NumPy array. For example, if you have 10 joints and the glove gives a value for each joint, convert each joint value to radians and put them inside a NumPy array:
targets = np.array([q1, q2, q3, ..., q10]).astype('f')
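Since the SenseGlove ROS workspace exposes the joint data through ROS, one way to fill that array is with a small subscriber. Below is a minimal sketch, assuming the glove publishes a sensor_msgs/JointState message and that the values arrive in degrees; the topic name and the degrees assumption are placeholders, so check senseglove_ros_ws for the actual interface:

import numpy as np
import rospy
from sensor_msgs.msg import JointState

targets = np.zeros(10, dtype=np.float32)  # one entry per simulated finger joint

def glove_callback(msg):
    # Placeholder message layout -- adapt to what senseglove_ros_ws actually publishes.
    # If the glove reports degrees, convert to radians for the DOF position targets.
    global targets
    targets = np.deg2rad(np.array(msg.position[:10], dtype=np.float32))

rospy.init_node("glove_reader", anonymous=True)
rospy.Subscriber("/senseglove/joint_states", JointState, glove_callback)  # placeholder topic name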

Then set the DOF properties to position control and define the parameters. You will have to play with the PD parameters (stiffness and damping) to find the best ones:

props = gym.get_actor_dof_properties(env, actor_handle)
props["driveMode"].fill(gymapi.DOF_MODE_POS)
props["stiffness"].fill(1000.0)
props["damping"].fill(200.0)
gym.set_actor_dof_properties(env, actor_handle, props)

Finally, on each step of the simulation, update the target values by reading from the glove:
gym.set_actor_dof_position_targets(env, actor_handle, targets)
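Put together, that per-step update sits inside the standard Isaac Gym stepping loop. A sketch (it assumes a viewer has already been created and that targets is kept up to date by the glove-reading code above):

while not gym.query_viewer_has_closed(viewer):
    # Push the latest glove joint angles as position targets.
    gym.set_actor_dof_position_targets(env, actor_handle, targets)

    # Step the physics and refresh the viewer.
    gym.simulate(sim)
    gym.fetch_results(sim, True)
    gym.step_graphics(sim)
    gym.draw_viewer(viewer, sim, True)

    # Throttle to real time so the simulated hand tracks your movements at true speed.
    gym.sync_frame_time(sim)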

You have to play with dt and the number of substeps to get a real-time simulation; a finer dt leads to a slower simulation:
sim_params.substeps = 1
sim_params.dt = 0.001

substeps is the number of times per step that the position control algorithm is updated.
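For reference, a sketch of where those two parameters go: they are set on the SimParams object before the sim is created (the up-axis and gravity values here are just common defaults, not requirements):

sim_params = gymapi.SimParams()
sim_params.dt = 0.001        # physics timestep per gym.simulate() call
sim_params.substeps = 1      # solver substeps per dt (see note above)
sim_params.up_axis = gymapi.UP_AXIS_Z
sim_params.gravity = gymapi.Vec3(0.0, 0.0, -9.81)
sim = gym.create_sim(0, 0, gymapi.SIM_PHYSX, sim_params)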


Thank you for your contribution.

I am currently working on the URDF file, and I will tell you if I succeed.

Thank you.