Isaac Sim reinforcement learning

What are the steps for reinforcement learning in Isaac Sim 2023, and what is a good way to debug the code? Right now I don't know which file to write my custom reinforcement learning code in.

I think you should follow the documentation and reach out to us if you get stuck somewhere or see any errors.
https://docs.omniverse.nvidia.com/isaacsim/latest/isaac_gym_tutorials/index.html

/Isaac sim2023.1.0/standalone_example/api/omni.isaac.gym only has cartpole_train.py, cartpole_task.py, and cartpole_play.py. Do you have shadowhand_train.py, shadowhand_task.py, and shadowhand.py? You could help us operate it remotely, and I can also pay a fee for it. Looking forward to your reply. Thank you very much.

Hi @1779004521 - You can find the ShadowHand example here: https://github.com/NVIDIA-Omniverse/OmniIsaacGymEnvs/blob/main/omniisaacgymenvs/tasks/shadow_hand.py

But that is only the task code for ShadowHand, and the training side is all YAML configuration files. I want to find both the task code and the training code for ShadowHand. Is there anything for ShadowHand like cartpole_task.py, cartpole_train.py, and cartpole_play.py? I found those under the /standalone/api/omni.isaac.gym path in the Isaac Sim installation directory.

Hi,

In Isaac Gym there aren't separate per-task files for training and testing like the ones you're looking for.

If I understood correctly, you're looking for the scripts/rlgames_train.py file, which you run with the Python interpreter provided by Isaac Sim (commonly set to PYTHON_PATH as done here).

This file serves as both your cartpole_train.py (the file used for training) and your cartpole_play.py (the file used for testing the trained policy), depending on the parameters you pass (more info based on the ShadowHand task here to determine whether you're running training or testing).

So what the script does is: look up the task you pass in the task parameter, find its configuration file under cfg/task, append the PPO string for the model configuration under cfg/train (usually; for a SAC implementation, look at how the Ant task does it), and build up the training with the corresponding task class mapped in the utils/task_util.py dictionary called task_map.
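The lookup step above can be sketched roughly like this. Everything here is an illustrative stand-in, not the real OIGE code: the task classes are empty stubs, and `initialize_task` is a hypothetical helper name.

```python
# Hypothetical sketch of the task_map lookup done in utils/task_util.py.
# CartpoleTask / ShadowHandTask stand in for the real task classes.
class CartpoleTask:
    pass

class ShadowHandTask:
    pass

# Maps the name passed via the `task=` command-line parameter to a class.
task_map = {
    "Cartpole": CartpoleTask,
    "ShadowHand": ShadowHandTask,
}

def initialize_task(task_name):
    """Look up and instantiate the task selected on the command line."""
    try:
        return task_map[task_name]()
    except KeyError:
        raise ValueError(f"Unknown task: {task_name}")
```

The real script then pairs the instantiated task with the configs it found under cfg/task and cfg/train before starting training.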

If you want to customize the training process, I'd start from there (though I've never tried, since the provided one works fine).

(All paths I mention assume you're inside the OmniIsaacGymEnvs repository, in the main omniisaacgymenvs folder.)

Hope it helps,
Elia

Thank you for your reply. So, as you understand it, can I customize the reinforcement learning task in Isaac Sim, or do I have to customize it in Isaac Gym? I've always wanted to use Isaac Gym for custom reinforcement learning, and I can run the OIGE examples, but I don't know how to set up an editor or interpreter to work with the OIGE code. Do you have a detailed tutorial for custom reinforcement learning?

Hi there, there are multiple ways to run reinforcement learning with Isaac Sim. One example is to use Isaac Sim directly with the Stable Baselines3 RL library, which is shown here: 9.8. Custom RL Example using Stable Baselines — Omniverse IsaacSim latest documentation. This example defines an individual training and task script, but does not support vectorized parallel training due to limitations of the Stable Baselines3 library.
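As a rough illustration of the single-environment structure that tutorial uses: the real example subclasses gym.Env and steps the Isaac Sim physics, while the class name and toy dynamics below are made up so the sketch runs anywhere.

```python
import numpy as np

class CartpoleEnvSketch:
    """Stand-in for the tutorial's gym.Env subclass; reset()/step()
    follow the classic gym interface that Stable Baselines3 expects."""

    def reset(self):
        # Real env: reset the Isaac Sim scene, return the first observation
        self.state = np.random.uniform(-0.05, 0.05, size=4).astype(np.float32)
        return self.state

    def step(self, action):
        # Real env: apply the action, step the physics, read observations back
        self.state = self.state + 0.01 * np.asarray(action, dtype=np.float32)
        reward = 1.0
        done = bool(abs(float(self.state[0])) > 2.4)
        return self.state, reward, done, {}
```

SB3's algorithms consume this one environment directly, which is why that workflow does not give you vectorized parallel training.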

Another method of using Isaac Sim for RL is by following the OmniIsaacGymEnvs framework. In this repo, we provide a tasking framework for launching RL trainings using the scripts/rlgames_train.py script and the rl-games library. This supports vectorized parallel training and allows us to implement more complex tasks, such as ShadowHand. New tasks can be defined by adding a new task script, along with task and training config files for the task. Please see https://github.com/NVIDIA-Omniverse/OmniIsaacGymEnvs/blob/main/README.md#training-scripts for details on the launch script and https://github.com/NVIDIA-Omniverse/OmniIsaacGymEnvs/blob/main/docs/framework.md for documentation of the OIGE framework. 9.2. Creating a new RL Example in OmniIsaacGymEnvs — Omniverse IsaacSim latest documentation also provides a more detailed walk-through of the framework and examples.
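A new task script in OIGE roughly follows this shape. This is a hedged skeleton: the `RLTask` stub below stands in for OIGE's real base class so the sketch runs outside Isaac Sim, and the attribute names and sizes are illustrative, though the overridden method names follow the ones used by tasks like shadow_hand.py.

```python
class RLTask:
    """Stub standing in for OIGE's base task class."""
    def __init__(self, name, env=None):
        self.name = name

class MyRobotTask(RLTask):
    """Skeleton of a custom task; method bodies are placeholders."""

    def __init__(self, name, sim_config=None, env=None):
        super().__init__(name, env)
        self.num_observations = 4   # illustrative sizes
        self.num_actions = 1

    def set_up_scene(self, scene):
        # Real task: add robot assets to the stage, create views of them
        pass

    def get_observations(self):
        # Real task: fill the observation buffer from simulation state
        return {}

    def calculate_metrics(self):
        # Real task: compute per-environment rewards
        pass

    def is_done(self):
        # Real task: flag environments whose episode has ended
        pass
```

The task class then gets registered in the task_map dictionary and paired with its config files so the launch script can find it.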

I've installed OIGE and I can run the Cartpole example, but where do I write the code? I open OIGE with VSCode, and a lot of packages in it show errors. I also tried the extension workflow `<isaac_sim_root>/isaac-sim.gym.sh --ext-folder </parent/directory/to/OIGE>`, but it shows an error. Is there any other way for me to write code in OIGE?

Please make sure you have the latest Isaac Sim 2023.1.0-hotfix-1 release and the latest OIGE updates to run the extension workflow. Code can be created/edited in VSCode or any other text editor. The python scripts do not need to be compiled. You can first try making small changes on existing tasks in OIGE to get started.

Ok, thanks for the reply, I'm going to give it a try. But I still have a question about the force sensor: I saw that the tutorial says the force sensor was removed in 2023.1.0, so is there any replacement? I recently wanted to add a few force sensors to the ShadowHand model, observe their values, and use the code API to get the force values they measure. What should I do?

Force sensors have been deprecated in favour of a set of new APIs available in ArticulationView to retrieve forces. You can check out the ShadowHand example for an example usage of the API: https://github.com/NVIDIA-Omniverse/OmniIsaacGymEnvs/blob/main/omniisaacgymenvs/tasks/shadow_hand.py#L133
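Conceptually, the View-based readout looks like the sketch below. It uses a stub so it runs outside Isaac Sim: the method name mirrors ArticulationView's `get_measured_joint_forces`, but the exact `(num_envs, num_joints, 6)` force/torque shape is my assumption here.

```python
import numpy as np

class ArticulationViewStub:
    """Stand-in for omni.isaac.core's ArticulationView (assumption: the
    real call returns one 6-component wrench per joint per environment)."""

    def __init__(self, num_envs, num_joints):
        self._shape = (num_envs, num_joints, 6)  # 3 force + 3 torque components

    def get_measured_joint_forces(self):
        # Real API: batched forces/torques for all environments in one call
        return np.zeros(self._shape, dtype=np.float32)

hands = ArticulationViewStub(num_envs=2, num_joints=24)
wrenches = hands.get_measured_joint_forces()
finger_forces = wrenches[:, :5, :3]  # e.g. slice out force vectors of interest
```

The batched return is what makes this suitable for observation buffers in parallel training, in contrast to per-joint sensors.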


Can this effort_sensor be used for force detection?

The EffortSensor can be used to track the torque or force applied to individual joints, but it does not provide a vectorized API for collecting this information across multiple environments. If you are working with parallel training using multiple environments, it is recommended to use the APIs available in the View classes.
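The contrast can be sketched with stubs (these are not the real omni.isaac classes: `get_measured_joint_efforts` mirrors the ArticulationView method name, while the EffortSensor stub returns a single scalar per call, which is the limitation being described).

```python
import numpy as np

class EffortSensorStub:
    """Single-joint, single-robot reading, like EffortSensor."""
    def __init__(self, effort):
        self._effort = effort

    def get_sensor_reading(self):
        return self._effort  # one value per call

class ArticulationViewStub:
    """Batched reading across all environments, like a View class."""
    def __init__(self, efforts):
        self._efforts = efforts  # shape: (num_envs, num_joints)

    def get_measured_joint_efforts(self):
        return self._efforts  # every env and joint in one call

efforts = np.arange(8.0).reshape(2, 4)  # 2 envs x 4 joints
batched = ArticulationViewStub(efforts).get_measured_joint_efforts()
single = EffortSensorStub(efforts[0, 1]).get_sensor_reading()
```

With many parallel environments, the per-joint sensor would need one call per joint per robot, which is why the batched View APIs are recommended for training.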


The following picture, on the left, shows running a custom reinforcement learning task under the standalone/api/omni.isaac.gym folder. Now I need to add a camera and other sensors to the model for object detection and other tasks, so I imported the camera on the right side (marked with a red horizontal line), but running shadowhand_train.py gives the error shown in the figure.

We want to use the camera and other sensors with the cartpole in standalone/api/omni.isaac.gym, but directly using `from omni.isaac.sensor import Camera` always gives an error saying there is no sensor module. At present I know that to use this sensor you need these two lines: `from omni.isaac.kit import SimulationApp` and `simulation_app = SimulationApp({"headless": False})`, but putting those two lines directly in cartpole_task.py causes the simulation environment to crash, which doesn't seem appropriate. What should I do?