Interactive ML/RL

Dear Members,

We have a use case for interactive RL (user input in the training loop). At the moment we use ML-Agents in Unity3D for this, but we cannot really scale up the simulations on the GPU.

I was just wondering if Gym would be the right tool for the job? I was thinking of something along the lines of Unity as a front end for user interaction, connected to Gym via the Python API, with Isaac Gym running the simulations on the GPU and sending the data back (roughly sketched below). Any thoughts on this would be really appreciated. I am fairly new to the NVIDIA ecosystem.

Or does the Omniverse ecosystem have a better toolset suited for such use cases? (We are aiming for interaction in VR, which means the solution needs some compatibility with OpenVR/XR.)
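To make the first idea a bit more concrete, the kind of Python-side interface I have in mind is roughly this. It is only a toy sketch with placeholder dynamics, not a real Isaac Gym or Unity API; `set_user_input` is something I imagine the front end calling whenever the controller moves:

```python
import numpy as np
import gym
from gym import spaces

class InteractiveEnv(gym.Env):
    """Toy stand-in for the GPU simulation; user input enters the obs."""

    def __init__(self):
        # 3 sim values + 3 user-input values (e.g. VR controller xyz)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(6,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)
        self.user_input = np.zeros(3, dtype=np.float32)
        self.state = np.zeros(3, dtype=np.float32)

    def set_user_input(self, controller_pos):
        # Called from the front end (Unity) whenever the user moves.
        self.user_input = np.asarray(controller_pos, dtype=np.float32)

    def step(self, action):
        # Placeholder dynamics; in practice this would be the GPU sim step.
        self.state[:2] += 0.1 * action
        obs = np.concatenate([self.state, self.user_input])
        reward = -float(np.linalg.norm(self.state[:2] - self.user_input[:2]))
        return obs, reward, False, {}  # old-style Gym 4-tuple

    def reset(self):
        self.state[:] = 0.0
        return np.concatenate([self.state, self.user_input])
```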

Thank you.
Krishnan

Hi @krishnanpc,

Can you provide some more details about your use case? What does your training environment look like? Maybe you could show a video or screenshots from training in Unity ML-Agents?

Thank you for your reply.

The long-term plan is to create an interactive, real-time version of the original Karl Sims experiment (evolving virtual creatures).
https://www.karlsims.com/evolved-virtual-creatures.html

I drew inspiration from the following video https://youtu.be/y6tXEK4fUU8

Skip to around the 6.5 minute mark for the RL side and the 10 minute mark for the genetic algorithm side. Essentially, the morphology of the creature is “evolved” using a genetic algorithm, and in each generation RL (in this case curriculum learning) is used to train the creature to reach a goal.

My thought was to do something similar: if we can speed up the RL, then each training run would finish much faster, and hopefully we could evolve the morphology in real time. The learning (the RL side) here would also include user input (maybe the controller position from VR).

So what I was thinking is to have the Unity front end do the morphology evolution and pass the morphology to Isaac Gym, which would run the simulations and send the “brain parameters” back to Unity (see the sketch below).
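In rough Python, the outer loop I have in mind looks something like this. Everything here is a placeholder toy: `train_rl` and `evaluate` stand in for the Isaac Gym training and rollout, and the “morphology” is just a list of limb lengths:

```python
import random

def train_rl(morphology, steps=100):
    # Placeholder for GPU training; returns dummy "brain parameters".
    return [random.gauss(0.0, 1.0) for _ in morphology]

def evaluate(morphology, brain):
    # Placeholder fitness; in reality, e.g. distance travelled to the goal.
    return -sum((m - b) ** 2 for m, b in zip(morphology, brain))

def mutate(morphology, sigma=0.1):
    return [max(0.1, m + random.gauss(0.0, sigma)) for m in morphology]

def evolve(pop_size=8, n_limbs=4, generations=5):
    population = [[random.uniform(0.5, 1.5) for _ in range(n_limbs)]
                  for _ in range(pop_size)]
    for gen in range(generations):
        scored = []
        for morphology in population:
            brain = train_rl(morphology)          # Isaac Gym side
            scored.append((evaluate(morphology, brain), morphology))
        scored.sort(reverse=True)                 # best fitness first
        parents = [m for _, m in scored[: pop_size // 2]]
        # Keep the parents; refill the population with their mutants.
        # Brains would go back to Unity for display/interaction here.
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return population

evolved = evolve()
```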

Any thoughts on this would be greatly appreciated.

Thanks
Krishnan

Hi @krishnanpc,

We face a similar problem: we currently use Unity3D for simulation and ML-Agents or other RL modules for training, and we also want to use Isaac Gym to accelerate the training. May I ask whether the NVIDIA team has replied further, and how you solved the problem?

I would appreciate it if you could reply.

Thanks,
Jayden.

Hello Jayden,

No. I didn’t get any responses.

I am still looking for a solution. A colleague suggested writing our own engine in C++ using GPU PhysX, but at the moment the work is stalled.

I would be interested to know if you have any solutions/ideas?

Hi Krishnan,

Thanks for your reply.
I don’t have any ideas right now. We can share information here if there are any updates in the future.

Hi @vmakoviychuk,

Thank you for your reply.

Sorry for the necro post, but I thought it better to post here than in a new thread.

For a simple example: we have a few robots trying to follow a course, and convergence is reached when a robot learns to navigate the course without running into obstacles. Our obstacles are real people (we use person detectors), and we deploy this environment in Mixed Reality, so ultimately this simulation runs in the “real world”.

See video: Cloudstore

Note: we use the rigid body’s velocity for movement, and each robot has 5 sensors around it that raycast and retrieve the obstacles and hit points.
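In case it helps to see what I would want on the GPU side: the 5 rays could in principle be batched over all robots with plain tensors, something like the sketch below. This is my own analytic workaround, not an Isaac Gym API (as far as I know Isaac Gym doesn’t expose a raycast query), so it intersects the rays with known circular obstacle positions directly; all names and shapes here are my assumptions:

```python
import torch

def ray_sensors(pos, ray_dirs, obs_pos, obs_rad, max_dist=10.0):
    # pos:      (N, 2)    robot positions in the plane
    # ray_dirs: (N, 5, 2) unit ray directions (5 sensors per robot)
    # obs_pos:  (M, 2)    obstacle centers (e.g. detected people)
    # obs_rad:  (M,)      obstacle radii
    rel = obs_pos[None, None, :, :] - pos[:, None, None, :]  # (N, 1, M, 2)
    b = (ray_dirs[:, :, None, :] * rel).sum(-1)              # (N, 5, M) projection on ray
    cc = (rel * rel).sum(-1)                                 # (N, 1, M) squared center dist
    disc = b * b - cc + obs_rad[None, None, :] ** 2          # ray/circle discriminant
    t = b - torch.sqrt(disc.clamp(min=0.0))                  # entry distance along ray
    hit = (disc >= 0) & (t >= 0)                             # did the ray actually hit?
    t = torch.where(hit, t, torch.full_like(t, max_dist))
    return t.min(dim=-1).values.clamp(max=max_dist)          # (N, 5) nearest hit per ray
```

On thousands of robots this would be a handful of batched tensor ops per step instead of 5 per-robot raycasts.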

We are currently running a pure-CPU, object-oriented C# implementation, which is not efficient in terms of performance. I wondered if it is possible to use Isaac Gym to sync the simulation with Unity rather than reconstruct everything in Isaac Gym again.

If so, what would be the best way to communicate between Isaac Gym and Unity? I tried the USD route, but it didn’t really work, and then I found out that USD support for Unity is still not available.
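Failing USD, the fallback I have been toying with is a plain TCP + JSON bridge. This is just my own sketch of the Python side, nothing official from NVIDIA or Unity; the Unity side would be a C# TcpClient sending one JSON line per frame:

```python
import json
import socket

def serve(host="127.0.0.1", port=9000):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()                   # Unity connects with a C# TcpClient
    with conn, conn.makefile("rwb") as f:
        for line in f:                       # one JSON message per line
            msg = json.loads(line)           # e.g. {"frame": 3, "robots": [...]}
            # ... step the GPU simulation with the received state here ...
            reply = {"frame": msg.get("frame"), "poses": []}
            f.write((json.dumps(reply) + "\n").encode())
            f.flush()
```

Newline-delimited JSON is slow for big states, but it would at least let me keep the scene authoring and rendering in Unity while the physics runs elsewhere.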

Thank you
Krishnan