Can I run inference with two engines simultaneously on Jetson using TensorRT?

I have two models, and I want to use TensorRT to accelerate them simultaneously. Is that possible?

Is there a demo? Any help would be appreciated.


Yes, but please remember that GPU resources are limited. Tasks will need to wait if the device runs out of resources.
A quick way to test this is to run two TensorRT samples in different consoles.
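Beyond running two separate processes, the usual in-process pattern is one execution context (and one CUDA stream) per engine, each driven from its own thread. The sketch below simulates the two inference workers with plain Python threads and a stubbed `infer` function; on a real Jetson, the stub would be replaced by actual TensorRT calls (deserializing the engine and invoking its execution context), which are assumptions here and not shown.

```python
import threading
import queue
import time

def make_worker(name, results):
    # In a real TensorRT program, each worker would own its engine's
    # IExecutionContext and CUDA stream; contexts must not be shared
    # across threads.
    def infer(batch):
        # Placeholder for the actual asynchronous inference call.
        time.sleep(0.01)           # simulated GPU work
        return (name, batch * 2)   # dummy "inference result"

    def run(jobs):
        while True:
            batch = jobs.get()
            if batch is None:      # poison pill -> shut down worker
                break
            results.put(infer(batch))
    return run

results = queue.Queue()
jobs_a, jobs_b = queue.Queue(), queue.Queue()

# Two "engines", each serviced by its own thread, consuming jobs concurrently.
t_a = threading.Thread(target=make_worker("engine_a", results), args=(jobs_a,))
t_b = threading.Thread(target=make_worker("engine_b", results), args=(jobs_b,))
t_a.start(); t_b.start()

for i in range(3):
    jobs_a.put(i)
    jobs_b.put(i)
jobs_a.put(None); jobs_b.put(None)
t_a.join(); t_b.join()

outputs = [results.get() for _ in range(6)]
print(len(outputs))  # 6 results total, 3 from each engine
```

Both workers make progress at the same time, but as noted above they still contend for one GPU, so throughput does not simply double.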


How can I run two engines in TensorRT at the same time, or in a pipeline? If you have sample code, please share it — in particular, how to define the contexts and engines and allocate memory so that they can share memory.
Many thanks!

If you want to use the DeepStream framework, which uses TensorRT internally, you can run two engines simultaneously by putting an nvinfer element into async mode. You can also run engines on separate GPUs.

Please see the DeepStream examples, such as this one, and the documentation.
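As an illustration of the two-engine DeepStream setup, a primary detector followed by a secondary classifier can be expressed as a `gst-launch-1.0` pipeline, with each nvinfer element loading its own TensorRT engine through its config file. The file names and config paths below are placeholders, and the exact decoder/sink elements vary by DeepStream version and platform:

```shell
# Two engines in one DeepStream pipeline: primary nvinfer (detector) feeds
# a secondary nvinfer (classifier). Paths are placeholders.
gst-launch-1.0 filesrc location=video.mp4 ! qtdemux ! h264parse ! \
  nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=primary_detector.txt ! \
  nvinfer config-file-path=secondary_classifier.txt ! \
  nvvideoconvert ! nvdsosd ! nveglglessink
```

Each nvinfer config file points at its own model/engine, so the two engines run within a single pipeline process rather than as separate applications.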