Interactive ML/RL

Dear Members,

We have a use case for interactive RL (user input in the training loop). At the moment we use ML-Agents in Unity3D for this, but we cannot really scale up the simulations on the GPU.

I was wondering whether Gym would be the right tool for the job. I was thinking of something along the lines of Unity as a front end for user interaction, connected to Gym via a Python API, with Isaac Gym running the simulations on the GPU and sending the data back. Any thoughts on this would be really appreciated; I am fairly new to the NVIDIA ecosystem.

Or does the Omniverse ecosystem have a toolset better suited to such use cases? (We aim for interaction in VR, so the solution needs some compatibility with OpenVR/XR.)

Thank you.

Hi @krishnanpc,

Can you provide some more details about your use case? What does your training environment look like? Could you perhaps share a video or screenshots from training in Unity ML-Agents?

Thank you for your reply.

The long-term plan is to create an interactive, real-time version of the original Karl Sims experiment (evolving virtual creatures).

I drew inspiration from the following video.

Skip to around the 6.5-minute mark for the RL side and the 10-minute mark for the genetic algorithm side. Essentially, the morphology of the creature is “evolved” using a genetic algorithm, and in each generation RL (in this case curriculum learning) is used to train the creature to reach a goal.

My thought was to do something similar: if we can speed up the RL, each training run would happen much faster, and hopefully we could evolve the morphology in real time. The learning (the RL side) would also include user input (perhaps the controller position from VR).
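To make the loop concrete, here is a toy sketch of that structure in Python. Everything here is hypothetical stand-in code: `train_policy` is a placeholder for the RL inner loop (which would really run curriculum RL in Isaac Gym), the limb-length genome, the fitness function, and the `user_signal` argument (standing in for VR controller input) are all made up for illustration.

```python
import random

def train_policy(morphology, user_signal=0.0):
    """Stand-in for the RL inner loop: returns a fitness score.
    In the real system this would train a policy in Isaac Gym and
    fold the user's VR input into the reward."""
    # Toy fitness: prefer morphologies whose limb lengths sum near a target,
    # nudged by the (hypothetical) user signal.
    return -abs(sum(morphology) - 3.0) + user_signal

def mutate(morphology, rate=0.1):
    # Gaussian mutation of each "gene" (limb length), clamped positive.
    return [max(0.1, gene + random.gauss(0, rate)) for gene in morphology]

def evolve(pop_size=8, limbs=4, generations=5):
    # Each individual is a list of limb lengths (the "morphology genome").
    population = [[random.uniform(0.5, 1.5) for _ in range(limbs)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Inner loop: "train" every morphology, then rank by fitness.
        scored = sorted(population, key=train_policy, reverse=True)
        parents = scored[: pop_size // 2]        # keep the fittest half
        children = [mutate(p) for p in parents]  # refill by mutation
        population = parents + children
    return max(population, key=train_policy)

best = evolve()
```

The point of the sketch is the shape of the loop: the GA outer loop only ever sees a fitness number per morphology, so the expensive inner RL step is exactly the part a GPU simulator could parallelize across the whole population.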

So what I was thinking is to have the Unity front end do the morphology evolution and pass the morphology to Isaac Gym, which would run the simulations and send the “brain parameters” back to Unity.
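One way to picture that Unity ↔ Python exchange is a small message layer on each side of a socket. The framing below (4-byte length prefix plus a UTF-8 JSON body) and the field names (`limbs`, `joints`, `weights`, `fitness`) are assumptions for illustration, not any existing Isaac Gym or ML-Agents protocol; Unity's C# side would mirror the same framing.

```python
import json
import struct

def encode_message(payload: dict) -> bytes:
    """Serialize a dict as: 4-byte big-endian length prefix + UTF-8 JSON body."""
    body = json.dumps(payload).encode("utf-8")
    return struct.pack(">I", len(body)) + body

def decode_message(data: bytes) -> dict:
    """Inverse of encode_message: read the length prefix, parse the JSON body."""
    (length,) = struct.unpack(">I", data[:4])
    return json.loads(data[4:4 + length].decode("utf-8"))

# Unity -> Python: a morphology description for the simulator to evaluate.
morphology_msg = encode_message({"limbs": [0.8, 1.2, 1.0],
                                 "joints": ["hinge", "hinge", "hinge"]})

# Python -> Unity: trained "brain parameters" (e.g. flattened policy weights)
# plus the fitness the GA needs for selection.
brain_msg = encode_message({"weights": [0.01, -0.3, 0.7], "fitness": 1.9})
```

With something like this, Unity stays the authority on morphology and user interaction, while the Python side only needs to deserialize a morphology, run the GPU simulation, and send back a weights-plus-fitness message.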

Any thoughts on this would be greatly appreciated.