I am using a trained Caffe model (.caffemodel) for inference with TensorRT, but I can't get it to use more than one GPU.
My setup is TensorRT 3 with CUDA 9 and Python, on Ubuntu 16.04, with four GTX 1080 Ti cards.
I tried selecting the GPU per process with:

os.environ['CUDA_VISIBLE_DEVICES'] = str(gpuNumber)

but all processes still use only the first GPU.
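For context, here is a simplified sketch of how I launch the workers (names like run_inference and gpuNumber are illustrative, and the actual TensorRT inference code is omitted). I set CUDA_VISIBLE_DEVICES inside each child process before any CUDA-using library would be imported, since my understanding is that the variable is only read when the process first initializes CUDA:

```python
import os
from multiprocessing import Process, Queue


def run_inference(gpu_number, results):
    # Restrict this process to a single GPU. As far as I know, this must
    # happen before any CUDA-using library (TensorRT, pycuda) is imported
    # in this process, or the variable is ignored.
    os.environ['CUDA_VISIBLE_DEVICES'] = str(gpu_number)
    # ... import tensorrt here and run the caffemodel (omitted) ...
    results.put((gpu_number, os.environ['CUDA_VISIBLE_DEVICES']))


if __name__ == '__main__':
    results = Queue()
    # One worker per GPU (four GTX 1080 Ti cards).
    procs = [Process(target=run_inference, args=(i, results)) for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(sorted(results.get() for _ in procs))
```

Despite this, nvidia-smi shows all the work landing on GPU 0. Is there something else I need to do to bind each process to its own device?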