I run into a CUDA out-of-memory error when converting a model from PyTorch to TensorRT with the torch2trt package.
Does CUDA or TensorRT provide any argument that would let the conversion use the memory of multiple GPUs?
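For context, my conversion call looks roughly like the minimal sketch below. The ResNet-50 model, input shape, and the max_workspace_size / fp16_mode values are placeholders for illustration, not my actual network or settings:

```python
import torch
from torchvision.models import resnet50
from torch2trt import torch2trt

# Placeholder model and input; my real network is larger, which is
# presumably why the builder runs out of memory on a single GPU.
model = resnet50().cuda().eval()
x = torch.randn(1, 3, 224, 224).cuda()

# The whole conversion runs on one GPU. Lowering the builder workspace
# or enabling FP16 reduces memory use, but I have not found an option
# that spreads the conversion across several GPUs.
model_trt = torch2trt(
    model,
    [x],
    max_workspace_size=1 << 25,  # 32 MB builder workspace
    fp16_mode=True,
)
```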
My GPU is an RTX 2080 Ti.
CUDA version: 10.0
cuDNN version: 7.5
TensorRT version: 18.104.22.168