Hello,
Is there an easy way to run RL policies trained in Isaac Gym inside Isaac Cortex? I use the Isaac Cortex workflow with a real robot. The problem is, as I understand it, that the real robot follows the position of the simulated robot. However, when a policy is executed, the simulated robot should follow the real one; otherwise small differences between simulation and the real world would lead to failure of the policy.
My approach would be to send the actions generated by the policy directly to the ROS controller of the real robot. In that case, however, the real robot must no longer follow the simulated robot; instead, the simulated robot must follow the real one.
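For reference, here is a minimal sketch of that idea: publishing the policy's actions as joint position targets to a ROS position controller. The topic name, joint names, and the `get_observation_from_real_robot` / `policy` helpers are placeholders for whatever your robot actually exposes, not part of any Isaac API:

```python
# Hypothetical sketch: stream policy actions to a ROS joint position controller.
# Topic and joint names are placeholders; adapt them to your robot's setup.
import rospy
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint

rospy.init_node("policy_action_streamer")
pub = rospy.Publisher(
    "/position_joint_trajectory_controller/command",  # placeholder topic
    JointTrajectory,
    queue_size=1,
)

joint_names = ["joint_1", "joint_2", "joint_3"]  # placeholder joint names
rate = rospy.Rate(60)  # match your policy's control frequency

while not rospy.is_shutdown():
    obs = get_observation_from_real_robot()  # hypothetical helper
    action = policy(obs)                     # trained Isaac Gym policy

    msg = JointTrajectory()
    msg.joint_names = joint_names
    point = JointTrajectoryPoint()
    point.positions = list(action)
    point.time_from_start = rospy.Duration(1.0 / 60.0)
    msg.points = [point]
    pub.publish(msg)
    rate.sleep()
```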
Is there a workflow for this problem?
It actually follows the commanded positions of the simulated robot, so the simulated (belief) robot and the real robot independently follow the same commanded positions. The MotionCommander gives a good example of how to set up a joint position control policy (commanders are policies with command APIs), so you might try replacing it with your own.
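As an illustration of that idea, here is a minimal sketch of a policy-like class that writes the trained network's actions as joint position commands on the belief robot each step. The helper names (`policy`, the observation layout) are assumptions, and the articulation calls use the generic `omni.isaac.core` API rather than any Cortex-specific machinery:

```python
# Hedged sketch: a "commander"-style joint position policy that commands the
# belief robot; on hardware the same commanded positions would be what the
# real robot's controller tracks. The policy callable is hypothetical.
import numpy as np
from omni.isaac.core.utils.types import ArticulationAction

class RLJointPositionPolicy:
    def __init__(self, robot, policy):
        self.robot = robot    # an omni.isaac.core Articulation
        self.policy = policy  # trained Isaac Gym policy (callable)

    def step(self):
        # Build the observation from the belief robot's current state.
        q = self.robot.get_joint_positions()
        dq = self.robot.get_joint_velocities()
        obs = np.concatenate([q, dq])

        # Assumes the policy outputs joint position targets; adapt this if
        # your policy outputs deltas or torques instead.
        target_q = self.policy(obs)

        # Command the belief robot; these same targets are the commanded
        # positions the real robot should follow.
        self.robot.get_articulation_controller().apply_action(
            ArticulationAction(joint_positions=target_q)
        )
```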
We’re actively working to make it easier to deploy your own trained policies with a focus on real-world deployment, so doing this should be easier in a future release. We’re also relaxing the requirement of running the simulator in the loop as the belief representation, so deployment will be more flexible.