How do I select which GPU runs inference when multiple GPUs are installed?

I have a trained model that will be deployed to a server with multiple GPUs installed. I want to run the same model on each of the GPUs, but I couldn't find anything in the developer guide about specifying which GPU to use. Does anyone know how to select one particular GPU out of several for TensorRT inference?
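In case it helps, here is a sketch of what I imagine the setup would look like, assuming the C++ API and that calling `cudaSetDevice()` before creating the runtime pins everything to that GPU (I'm not sure this is the intended mechanism, hence the question). The engine path `model.plan` and the logger class are placeholders:

```cpp
// Sketch: pick a GPU by index, then deserialize and run a TensorRT engine.
// Assumption: cudaSetDevice() before any TensorRT calls makes all subsequent
// allocations and execution happen on that device.
#include <NvInfer.h>
#include <cuda_runtime_api.h>

#include <cstdlib>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cerr << msg << "\n";
    }
};

int main(int argc, char** argv) {
    // GPU index passed on the command line, e.g. ./infer 1
    int gpuId = (argc > 1) ? std::atoi(argv[1]) : 0;
    cudaSetDevice(gpuId);  // select the device before touching TensorRT

    // Load the serialized engine ("model.plan" is a placeholder path).
    std::ifstream file("model.plan", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    Logger logger;
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size());
    nvinfer1::IExecutionContext* context = engine->createExecutionContext();
    // ... set up buffers and enqueue inference on the selected GPU ...
    return 0;
}
```

Alternatively, I suppose I could launch one process per GPU with `CUDA_VISIBLE_DEVICES=<id>` set in each process's environment, but an in-process API call would be cleaner if one exists.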