Sim-to-real problem

Hello, I am a student. I have trained a robot in the Isaac Gym simulator using its rl_games RL library. Now I am looking to deploy this trained model on a real robot.

For this, I need to replicate the neural network architecture used by rl_games on the robot and load the .pth checkpoint.

However, re-implementing the network architecture exactly as it is in rl_games seems overly complex. Is there a way to accurately reproduce the same architecture for deployment on a real robot?
Alternatively, are there other methods to effectively apply this trained model to a real robot?
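What the robot ultimately needs is only the actor's forward pass, so one way to avoid pulling the whole rl_games stack onto the robot is to dump the trained weights to plain arrays (e.g. with `np.savez` from the loaded .pth) and re-implement just that pass. A minimal NumPy sketch, assuming a two-hidden-layer ELU MLP actor; the layer sizes and weight naming here are illustrative, not the actual rl_games checkpoint layout:

```python
import numpy as np

def elu(x, alpha=1.0):
    # ELU activation, as commonly used in Isaac Gym locomotion policies
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def policy_forward(obs, weights):
    """Deterministic MLP actor: obs -> ELU hidden layers -> action mean."""
    x = obs
    for i in range(weights["n_hidden"]):
        x = elu(weights[f"w{i}"] @ x + weights[f"b{i}"])
    # final linear layer outputs the action mean (no activation)
    return weights["w_out"] @ x + weights["b_out"]

# Example with random weights: 12 observations -> 32 -> 32 -> 4 actions
rng = np.random.default_rng(0)
weights = {
    "n_hidden": 2,
    "w0": rng.standard_normal((32, 12)), "b0": np.zeros(32),
    "w1": rng.standard_normal((32, 32)), "b1": np.zeros(32),
    "w_out": rng.standard_normal((4, 32)), "b_out": np.zeros(4),
}
action = policy_forward(rng.standard_normal(12), weights)
print(action.shape)  # (4,)
```

This removes PyTorch from the robot entirely, at the cost of keeping the exported weights and the hand-written forward pass in sync with the trained model.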

Have you tried TensorRT?

Hi @jheyeo222

This post may help: Implementation method of the model learned with OmniIsaacGymEnvs for real robot - #2 by toni.nv

Thanks Toni, but are you sure this is a sim-to-real example, and have they shared all the files?
The notes say: "The checkpoints obtained in Isaac Gym were not evaluated with the real robot. However, they were evaluated in Omniverse Isaac Gym showing successful performance."

So if they haven't evaluated the checkpoints on the robot, how is the robot working?

In reaching_iiwa_real_ros_ros2_skrl_eval and the two other eval files, there is code for loading the checkpoint files:

# load checkpoints
if control_space == "joint":
    agent.load("./agent_joint.pt")
elif control_space == "cartesian":
    agent.load("./agent_cartesian.pt")

but there is not much after that. I think this project is a hardware-in-the-loop example rather than a sim-to-real example; in a sim-to-real scenario I would expect something like classic control-style controllers running on the robot.
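The part that usually comes "after" in a sim-to-real deployment is a fixed-rate loop that reads sensors, runs the policy, and sends low-level commands. A schematic sketch; `read_obs` and `send_action` are hypothetical callbacks that would wrap the robot's real driver (ROS topics, a vendor SDK, etc.):

```python
import time

def control_loop(policy, read_obs, send_action, rate_hz=50, steps=100):
    """Fixed-rate deployment loop: observe -> policy -> command."""
    period = 1.0 / rate_hz
    for _ in range(steps):
        t0 = time.monotonic()
        obs = read_obs()        # e.g. joint positions/velocities
        action = policy(obs)    # trained policy inference
        send_action(action)     # e.g. joint position targets
        # sleep the remainder of the period to keep a steady control rate
        time.sleep(max(0.0, period - (time.monotonic() - t0)))

# Example with dummy callbacks (no hardware):
log = []
control_loop(policy=lambda obs: [x * 0.5 for x in obs],
             read_obs=lambda: [1.0, -1.0],
             send_action=log.append,
             rate_hz=1000, steps=3)
print(log)  # [[0.5, -0.5], [0.5, -0.5], [0.5, -0.5]]
```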

@Denys88 has suggested this code:

I tested it and it works on Google Colab, but I couldn't get it running on my computer because the envpool library requires Linux.
I’m wondering if there is any other example for sim2real?

I had totally forgotten about legged_gym and Isaac Orbit!

https://docs.omniverse.nvidia.com/isaacsim/latest/isaac_gym_tutorials/ext_omni_isaac_orbit.html

legged_gym is fairly complicated, but it has some features for sim-to-real applications and gives you access to the details of the neural network you are creating. Isaac Orbit is the successor to legged_gym and supports many robotic arms as well. If you want to implement sim-to-real applications, Orbit is probably the best option.

I have already tried using Legged Gym, and I found it to be an excellent resource for learning. I wasn’t very familiar with Orbit, but I will certainly give it a try.

Yeah, Orbit is the successor to legged_gym. Here is a code sample from Orbit's standalone examples: it first loads a .pth file, converts it to ONNX, and then runs the inference loop:

# load previously trained model
ppo_runner = OnPolicyRunner(env, agent_cfg.to_dict(), log_dir=None, device=agent_cfg.device)
ppo_runner.load(resume_path)
print(f"[INFO]: Loading model checkpoint from: {resume_path}")

# obtain the trained policy for inference
policy = ppo_runner.get_inference_policy(device=env.unwrapped.device)

# export policy to onnx
export_model_dir = os.path.join(os.path.dirname(resume_path), "exported")
export_policy_as_onnx(ppo_runner.alg.actor_critic, export_model_dir, filename="policy.onnx")

# reset environment
obs, _ = env.get_observations()
# simulate environment
while simulation_app.is_running():
    # run everything in inference mode
    with torch.inference_mode():
        # agent stepping
        actions = policy(obs)
        # env stepping
        obs, _, _, _ = env.step(actions)

# close the simulator
env.close()

You also have access to the details of the neural network:

policy = RslRlPpoActorCriticCfg(
    init_noise_std=1.0,
    actor_hidden_dims=[32, 32],
    critic_hidden_dims=[32, 32],
    activation="elu",
)
algorithm = RslRlPpoAlgorithmCfg(
    value_loss_coef=1.0,
    use_clipped_value_loss=True,
    clip_param=0.2,
    entropy_coef=0.005,
    num_learning_epochs=5,
    num_mini_batches=4,
    learning_rate=1.0e-3,
    schedule="adaptive",
    gamma=0.99,
    lam=0.95,
    desired_kl=0.01,
    max_grad_norm=1.0,
)
I have successfully applied ONNX export to the basic examples provided in Isaac Gym. Now I plan to experiment with it in both Orbit and legged_gym. Thank you for your helpful responses. I wish you success in all your endeavors.

No problem! You got lucky: Orbit just released its new version today!

Adding a new robot/task is not straightforward, though; there are five files that need to be changed.

