TensorRT on a machine with 2 GPUs

Hello,

I am using TensorRT 5 for inference on a single RTX GPU. Because I need to run many different nets in parallel, I am limited by the amount of memory available on the GPU.

So my question is: if I have two identical RTX cards in my machine, linked by SLI, can I run inference on both cards? In other words, will this double the memory available for inference in my program?
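
(To clarify what I mean by "available memory": I currently check the free memory per device with a small CUDA runtime snippet like the sketch below; the loop over devices is just my assumption of how the second card would be enumerated.)

```cpp
#include <cstdio>
#include <cuda_runtime_api.h>

// Sketch: report free/total memory on every visible GPU.
int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    for (int d = 0; d < deviceCount; ++d) {
        cudaSetDevice(d);  // switch to device d
        size_t freeBytes = 0, totalBytes = 0;
        cudaMemGetInfo(&freeBytes, &totalBytes);
        printf("GPU %d: %zu MiB free of %zu MiB\n",
               d, freeBytes >> 20, totalBytes >> 20);
    }
    return 0;
}
```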

Do I need to modify my code to control which card each net runs on, or does the library handle this automatically?
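
If manual placement is needed, I imagine it would look something like the sketch below: call cudaSetDevice before creating the runtime and deserializing each engine, so that each net lives on a chosen card. This is only my guess at the pattern; loadEngineOnDevice and the plan buffer are hypothetical names, not anything from the TensorRT samples.

```cpp
#include <cstdio>
#include <vector>
#include <cuda_runtime_api.h>
#include <NvInfer.h>

// Minimal logger that the TensorRT API requires.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) printf("[TRT] %s\n", msg);
    }
} gLogger;

// Hypothetical helper: deserialize one serialized engine (plan) on a given GPU.
nvinfer1::ICudaEngine* loadEngineOnDevice(int device, const std::vector<char>& plan) {
    cudaSetDevice(device);  // TensorRT objects created after this live on `device`
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
    return runtime->deserializeCudaEngine(plan.data(), plan.size(), nullptr);
}

// Usage idea: put half of the nets on each card, and call cudaSetDevice(device)
// again on the worker thread before enqueue()/execute() for that net.
```

Is this roughly the right approach, or is there a supported multi-GPU mechanism I am missing?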

Thanks for your help,
Denis