Multi-GPU training using MIG

Hi all,

I recently got access to a GPU cluster that uses the MIG technology to slice larger GPU cards.
Now I am wondering whether there is any way to use several slices for multi-GPU training (e.g. with PyTorch's nn.DistributedDataParallel)?
For non-MIG devices one can select several GPUs via CUDA_VISIBLE_DEVICES=0,1,2… . MIG also accepts CUDA_VISIBLE_DEVICES (using the MIG UUIDs from nvidia-smi -L), but a CUDA process then only enumerates a single slice, so I cannot see how to address more than one.
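For context, this is roughly the setup I have in mind: one process per MIG slice, each pinned to its slice via CUDA_VISIBLE_DEVICES before CUDA is initialized. The UUIDs below are placeholders, and I'm not sure the gloo backend is the right call here (as far as I know NCCL does not work across MIG instances), so treat this as a sketch of the intent rather than something I know to be correct:

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

# Placeholder MIG UUIDs; real ones come from `nvidia-smi -L`
MIG_UUIDS = [
    "MIG-11111111-1111-1111-1111-111111111111",
    "MIG-22222222-2222-2222-2222-222222222222",
]

def worker(rank, world_size):
    # Pin this process to one MIG slice *before* any CUDA call;
    # a CUDA process only enumerates one MIG instance anyway.
    os.environ["CUDA_VISIBLE_DEVICES"] = MIG_UUIDS[rank]

    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    # Using gloo since NCCL reportedly does not support MIG slices
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # "cuda:0" is the single slice visible to this process
    model = torch.nn.Linear(10, 10).to("cuda:0")
    ddp_model = DDP(model)

    out = ddp_model(torch.randn(4, 10, device="cuda:0"))
    out.sum().backward()
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(worker, args=(len(MIG_UUIDS),), nprocs=len(MIG_UUIDS))
```

Is something along these lines supposed to work, or is multi-slice training simply not supported with MIG?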

Thanks in advance,

M