Can Isaac Gym support multi-agent reinforcement learning?

Hi!
I know that Isaac Gym can run many environments in parallel, but I was wondering whether it is possible to add two or more agents to one ‘grid’ and train them on a competitive or cooperative task.
If so, how do I go about it? Any suggestions?
Also, can such a multi-agent environment still be trained in parallel by Isaac Gym, and will the computational cost be high?

Hi,
I’m not sure, but since “Ray” can also be imported, it may be possible to create a multi-agent environment by replacing OpenAI Gym with Isaac Gym.
However, it doesn’t seem to be possible at this time.

https://docs.ray.io/en/latest/rllib-env.html

From the docs:
“Isaac Gym includes a basic PPO implementation and a straightforward RL task system that can be used with it, but users may substitute alternative task systems and RL algorithms as desired.”
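
For context, RLlib’s multi-agent API expects environments to speak in per-agent dictionaries. Below is a minimal sketch of that interface; it uses the older Ray API (newer versions split `done` into terminated/truncated dicts), and backing it with Isaac Gym tensors instead of numpy is an untested assumption on my part:

```python
import numpy as np
from ray.rllib.env.multi_agent_env import MultiAgentEnv


class TwoAgentEnv(MultiAgentEnv):
    """Toy two-agent env showing RLlib's dict-based conventions."""

    def __init__(self, config=None):
        super().__init__()
        self.agents = ["agent_0", "agent_1"]

    def reset(self):
        # One observation entry per agent id.
        return {aid: np.zeros(21, dtype=np.float32) for aid in self.agents}

    def step(self, action_dict):
        # action_dict maps agent id -> that agent's action.
        obs = {aid: np.zeros(21, dtype=np.float32) for aid in self.agents}
        rewards = {aid: 0.0 for aid in self.agents}
        dones = {aid: False for aid in self.agents}
        # The special "__all__" key tells RLlib when the whole episode ends.
        dones["__all__"] = False
        return obs, rewards, dones, {}
```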

I seem to have found a configuration option for multi-agent tasks in the base class ‘vec_task.py’, called ‘numAgents’. But I don’t know how to use it; there doesn’t seem to be any example code I can refer to.

It doesn’t seem to be used.
Multi-agent training isn’t mentioned in the “User Guide”.

How about this?

Following this topic, I am also working on this.

I am also keen to see if there are already established code samples for MARL, but I have yet to find any within the IsaacGymEnvs git repository.

What I have personally done:
I have tried modifying the Quadcopter task into a multi-agent variant, where the observation space, which used to be size 21, is now 21 × N, and the action space, which used to be 12, is now 12 × N.
From there on, the RL portion is up to us end users, where we do our own state/observation/action vector splitting or concatenating; a rough sketch of that step is below.
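
Roughly, the splitting/concatenating looks like the snippet below. This is only a sketch of my own experiment: the sizes (21 and 12), the agent-major memory layout, and the stand-in random policy are assumptions, not code from the stock task.

```python
import torch

num_envs, num_agents = 4096, 2
obs_per_agent, act_per_agent = 21, 12

# The modified task exposes one flat buffer per env:
# (num_envs, 21 * N) observations in, (num_envs, 12 * N) actions out.
obs_buf = torch.zeros(num_envs, obs_per_agent * num_agents)

# Split the flat observation into per-agent slices for the policy
# (assumes an agent-major layout: agent 0's 21 values come first).
per_agent_obs = obs_buf.view(num_envs, num_agents, obs_per_agent)

# Stand-in for a shared policy acting on each agent's slice.
actions = torch.tanh(torch.randn(num_envs, num_agents, act_per_agent))

# Concatenate per-agent actions back into the flat buffer that the
# task's pre_physics_step expects.
flat_actions = actions.reshape(num_envs, num_agents * act_per_agent)
```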


Thanks, I know this library. It has multi-agent tasks in the StarCraft II environment. Given that there is a multi-agent configuration in the base class, I think there should be no problem doing multi-agent reinforcement learning through Isaac Gym; I just don’t know how to implement it or how high the computational cost would be.

We don’t currently have an example of multi-agent RL, but it should be possible to extend the framework to support that. @piggy4eva’s approach sounds like a great starting point. The numAgents variable is currently used for compatibility with RL Games, but it is set to 1 in all of our examples.
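
For reference, numAgents comes in through the task’s config. A minimal sketch of that plumbing (the exact key path and the default of 1 are assumptions to verify against your local vec_task.py):

```python
# Sketch of how numAgents flows from a task config into the env.
# The exact key path is an assumption -- check vec_task.py.
cfg = {
    "env": {
        "numEnvs": 4096,
        "numObservations": 21 * 2,  # per-env obs = per-agent obs * numAgents
        "numActions": 12 * 2,
        "numAgents": 2,             # assumed to default to 1 when omitted
    }
}

num_agents = cfg["env"].get("numAgents", 1)
```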

Hi @AshWang ,

Yes, multi-agent scenarios are possible in Isaac Gym: you can have any number of interacting robots in a single env. While it’s not available in the public release, I re-implemented the OpenAI Ant sumo env in Isaac Gym and successfully trained it with rl-games using only a single GPU. Adding @gstate to discuss whether it would be possible to include an example of a multi-agent env in one of the future updates.


Hi @AshWang - it would definitely be cool to add a multi-agent example env to IsaacGymEnvs at some point. We’ll have to figure out timing for cleaning up @vmakoviychuk’s sumo env work for release, but also happy to take external contributions if someone else wants to make a PR for one.

Take care,
-Gav


Hi @gstate ,

Thanks for the reply.
Does the Preview 3 release have everything in place to implement multi-agent reinforcement learning? If so, I might try to implement it.
Looking forward to good news in any case!

Yes - everything should be available to do this. You just need to ensure that the agents that will interact are in the same collision group. See the 1080 Balls of Solitude example to understand the collision filtering.
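
For reference, here’s a minimal sketch of that setup (the asset path, poses, and env counts are placeholders; create_env and create_actor are the standard Isaac Gym calls, with the collision group and filter as the last two arguments):

```python
from isaacgym import gymapi

gym = gymapi.acquire_gym()
sim = gym.create_sim(0, 0, gymapi.SIM_PHYSX, gymapi.SimParams())

# Placeholder asset -- point this at whatever robot you are using.
asset = gym.load_asset(sim, "./assets", "mjcf/nv_ant.xml", gymapi.AssetOptions())

num_envs, envs_per_row = 64, 8
lower, upper = gymapi.Vec3(-2.0, -2.0, 0.0), gymapi.Vec3(2.0, 2.0, 2.0)

for i in range(num_envs):
    env = gym.create_env(sim, lower, upper, envs_per_row)
    # Both agents share collision group i, so they can contact each
    # other; a distinct group per env keeps the envs isolated.
    group, flt = i, 0
    pose_a, pose_b = gymapi.Transform(), gymapi.Transform()
    pose_a.p = gymapi.Vec3(-0.5, 0.0, 1.0)
    pose_b.p = gymapi.Vec3(0.5, 0.0, 1.0)
    gym.create_actor(env, asset, pose_a, "agent_a", group, flt)
    gym.create_actor(env, asset, pose_b, "agent_b", group, flt)
```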

Take care,
-Gav
