Does TensorRT support multi-GPU inference?

Hello, we are algorithm developers using the DRIVE PX2.

We are using the SSD and YOLO algorithms for our perception tasks.

However, we have a performance issue and want to run inference on both the iGPU and the dGPU.

So does TensorRT support multi-GPU inference?

In my opinion, that's not a good idea, because the dGPU supports only FP32 and INT8, while the iGPU supports only FP32 and FP16.
If your engine runs only in FP32 mode, this isn't a serious problem, since both GPUs support FP32. But if you want to use INT8 or FP16, you will face a big issue: the two GPUs do not share the same reduced precisions, and FP32 performance is much lower than the others. I recommend dedicating one GPU to each algorithm.
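If you do want to keep both GPUs busy, the usual pattern is to run a separate engine per device rather than splitting one engine across them, since a TensorRT engine is tied to the GPU it was built for. Below is a minimal sketch of that "one GPU per algorithm" setup. The engine file names, device IDs, and the TensorRT 4.x-era API (as shipped on the PX2) are assumptions you should adapt to your setup:

```cpp
// Sketch: one TensorRT engine per GPU. File names and device IDs are
// placeholders; API details assume the TensorRT 4.x shipped on PX2.
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <cstdio>
#include <fstream>
#include <vector>

class Logger : public nvinfer1::ILogger {
    // Note: TensorRT 8+ declares this method noexcept.
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) std::printf("%s\n", msg);
    }
} gLogger;

// TensorRT engines are device-specific: deserialize each one with the
// target GPU active so its kernels and memory live on that device.
nvinfer1::ICudaEngine* loadEngineOnDevice(const char* path, int device) {
    cudaSetDevice(device);  // subsequent CUDA/TensorRT work targets this GPU
    std::ifstream file(path, std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
    // On TensorRT 8+ drop the trailing nullptr (no IPluginFactory).
    return runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
}

int main() {
    // Hypothetical assignment: SSD on the dGPU (device 0), YOLO on the
    // iGPU (device 1); verify the IDs with cudaGetDeviceProperties.
    nvinfer1::ICudaEngine* ssd  = loadEngineOnDevice("ssd.engine", 0);
    nvinfer1::ICudaEngine* yolo = loadEngineOnDevice("yolo.engine", 1);

    // One execution context per engine; call cudaSetDevice(...) again
    // before each enqueue so the work lands on the matching GPU.
    nvinfer1::IExecutionContext* ssdCtx  = ssd->createExecutionContext();
    nvinfer1::IExecutionContext* yoloCtx = yolo->createExecutionContext();
    (void)ssdCtx; (void)yoloCtx;  // enqueue/execute calls omitted here
    return 0;
}
```

With this layout you can still build each engine with the best precision its device supports (INT8 on the dGPU, FP16 on the iGPU), which avoids forcing everything down to FP32.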

Thank you…