Can TensorRT use multiple GPUs when converting a model via torch2trt?

Hi all,

I run into a CUDA out-of-memory error when converting a model from PyTorch to TensorRT with the torch2trt package.
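
For reference, here is a minimal sketch of the conversion I'm running (the model and input shape are placeholders, not my actual network):

```python
import torch
from torch2trt import torch2trt
from torchvision.models import resnet50

# placeholder model -- my real network is larger, which is where the OOM occurs
model = resnet50().cuda().eval()

# example input on the GPU; torch2trt traces the model with this tensor
x = torch.ones((1, 3, 224, 224)).cuda()

# conversion step that raises "CUDA out of memory"
model_trt = torch2trt(model, [x])
```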

Does CUDA or TensorRT provide any option that would let the conversion use the memory of multiple GPUs?

GPU: RTX 2080 Ti
CUDA version: 10.0
cuDNN version: 7.5
TensorRT version: 5.1.5.0

Thanks!