Can TensorRT run multiple inferences concurrently on the same GPU?

Can I run multiple inferences simultaneously on one GPU using TensorRT on the Jetson TX2?

If so, how can this be done?

Thanks.