Run inference on Policy trained with Omni Isaac Gym

Hello,

Please take a look at this page: https://docs.omniverse.nvidia.com/isaacsim/latest/tutorial_gym_new_rl_example.html

In the code just above the sentence “We should see our cartpole policy running!”, inference is run on the trained neural network in this line:

> action, _states = model.predict(obs)

I am using NVIDIA-Omniverse/OmniIsaacGymEnvs (https://github.com/NVIDIA-Omniverse/OmniIsaacGymEnvs) to train a neural network. It does not use Stable Baselines; instead it uses Denys88/rl_games (https://github.com/Denys88/rl_games).

I have not found a way to run inference manually on a neural network trained with this framework. I know that you can run inference by setting test=True on the command line and supplying a checkpoint, but I want to do it the way it is done in the code above, i.e. inside the code itself. The reason is that I want to run inference on different neural networks depending on the current state of the environment. Ideally I would have, for example, three neural networks (model_0, model_1, model_2) that can all be queried in the same program, along the lines of the sketch below.
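This is roughly the pattern I am after. It is only a sketch, assuming rl_games' `torch_runner.Runner`/player API (`runner.create_player()`, `player.restore()`, `player.get_action()`) works as in the rl_games examples; `cfg_dict` is assumed to be the same rl_games config dictionary used for training, `select_model()` and the checkpoint paths are hypothetical, and `obs` is assumed to already be a torch tensor on the player's device:

```python
import torch
from rl_games.torch_runner import Runner

def load_player(cfg_dict, checkpoint_path):
    """Build one inference-only rl_games player per checkpoint."""
    runner = Runner()
    runner.load(cfg_dict)                 # same config dict used for training (assumption)
    player = runner.create_player()       # network without the training machinery
    player.restore(checkpoint_path)       # load the trained weights
    return player

players = [
    load_player(cfg_dict, "runs/Cartpole/nn/model_0.pth"),  # hypothetical checkpoint paths
    load_player(cfg_dict, "runs/Cartpole/nn/model_1.pth"),
    load_player(cfg_dict, "runs/Cartpole/nn/model_2.pth"),
]

obs = env.reset()
while True:
    # select_model() is a hypothetical function mapping the current
    # environment state to an index 0, 1 or 2
    player = players[select_model(obs)]
    with torch.no_grad():
        action = player.get_action(obs, True)  # second arg: deterministic actions
    obs, reward, done, info = env.step(action)
```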

Hi @barandemirbd2000

An alternative could be to use the skrl library (see skrl's examples for OmniIsaacGymEnvs (OIGE)) and manually control the evaluation as described in this post: Deploy a trained PPO Agent · Toni-SM/skrl · Discussion #87 (https://github.com/Toni-SM/skrl/discussions/87)
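A minimal sketch of that manual-evaluation pattern, assuming `agent` and `env` are already instantiated and wrapped as in the skrl OIGE examples, that `agent.act()` returns a tuple whose first element is the actions, and that the wrapped env follows the gymnasium-style reset/step API of recent skrl versions; the checkpoint path is just a placeholder:

```python
import torch

# agent and env are assumed to be set up as in the skrl OIGE examples
agent.load("./runs/torch/Cartpole/checkpoints/best_agent.pt")  # placeholder path
agent.set_running_mode("eval")  # switch off training-only behavior

states, infos = env.reset()
while True:
    with torch.no_grad():
        actions = agent.act(states, timestep=0, timesteps=0)[0]
    states, rewards, terminated, truncated, infos = env.step(actions)
```

The same pattern extends to several agents: load each one from its own checkpoint and pick which `agent.act()` to call based on the current states.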