How to run TensorRT on a multi-GPU platform

Hello. I have a question about how to run TRT on a multi-GPU platform.
My computer has two GPUs: one is a TITAN Xp, the other is a 1080 Ti.
I want to run inference for two networks on those GPUs at the same time. How should I do this?

Right now I can run sampleUffSSD on one GPU.

Hello,

Per https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#faq

Each ICudaEngine object is bound to a specific GPU when it is instantiated, either by the builder or on deserialization. To select the GPU, use cudaSetDevice() before calling the builder or deserializing the engine. Each IExecutionContext is bound to the same GPU as the engine from which it was created. When calling execute() or enqueue(), ensure that the thread is associated with the correct device by calling cudaSetDevice() if necessary.
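The pattern from the FAQ can be sketched as one thread per GPU, with each thread calling cudaSetDevice() before deserializing its engine and before executing. This is only an illustrative sketch, not a complete sample: the plan file names (net0.plan, net1.plan), the device ordinals, and the omitted buffer/bindings setup are all assumptions, and the exact deserializeCudaEngine signature varies slightly between TensorRT versions.

```cpp
// Hedged sketch: one serialized engine per GPU, each driven from its own
// thread. Plan file names and device ordinals are hypothetical.
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <cstdio>
#include <fstream>
#include <iterator>
#include <thread>
#include <vector>

using namespace nvinfer1;

// Minimal logger required by the TensorRT runtime.
class Logger : public ILogger {
  void log(Severity severity, const char* msg) noexcept override {
    if (severity <= Severity::kWARNING) std::printf("%s\n", msg);
  }
} gLogger;

void runOnDevice(int device, const char* planFile) {
  // Bind this thread to the target GPU *before* deserializing the engine;
  // the engine stays bound to this device for its lifetime.
  cudaSetDevice(device);

  std::ifstream f(planFile, std::ios::binary);
  std::vector<char> plan((std::istreambuf_iterator<char>(f)),
                         std::istreambuf_iterator<char>());

  IRuntime* runtime = createInferRuntime(gLogger);
  ICudaEngine* engine =
      runtime->deserializeCudaEngine(plan.data(), plan.size());
  IExecutionContext* context = engine->createExecutionContext();

  // ... allocate device buffers with cudaMalloc, copy inputs in, then run
  //     inference; bindings setup is omitted for brevity.
  // context->executeV2(bindings);
}

int main() {
  // One thread per GPU; cudaSetDevice is per-thread state, so each worker
  // sets its own device and the two nets run concurrently.
  std::thread t0(runOnDevice, 0, "net0.plan");  // e.g. the TITAN Xp
  std::thread t1(runOnDevice, 1, "net1.plan");  // e.g. the 1080 Ti
  t0.join();
  t1.join();
  return 0;
}
```

Since each thread keeps its own current device, no further cudaSetDevice() calls are needed inside the inference loop as long as each engine is only ever used from the thread that created it.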