TensorRT Engines on NVIDIA Xavier NX

Hey, I have been working on a deep learning model for image segmentation using TensorRT on a Jetson Xavier NX.

I am using JetPack 4.6.

My question is: can I deserialize and load multiple TensorRT engines into memory for inference? I would also like to switch between these engines depending on the input.

Hi,

Yes, you can.
But please make sure you create a separate CUDA context for each model.

Below is an example for your reference:
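A minimal Python sketch of the pattern, assuming two serialized engine files (the names segmentation_a.engine and segmentation_b.engine, and the resolution-based switching rule, are placeholders for your own models and selection logic):

```python
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # initializes CUDA and creates a context

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engine(path):
    """Deserialize a TensorRT engine from a serialized plan file."""
    runtime = trt.Runtime(TRT_LOGGER)
    with open(path, "rb") as f:
        return runtime.deserialize_cuda_engine(f.read())

# Placeholder engine files; replace with your own serialized plans.
engine_a = load_engine("segmentation_a.engine")
engine_b = load_engine("segmentation_b.engine")

# Both engines stay resident in memory, each with its own execution context.
context_a = engine_a.create_execution_context()
context_b = engine_b.create_execution_context()

def pick_context(input_shape):
    """Choose an engine per input, e.g. by image resolution (placeholder rule)."""
    return context_a if input_shape[-1] <= 512 else context_b
```

The sketch keeps both deserialized engines in memory under one process and switches execution contexts per input; if your models run from different threads, give each model its own dedicated CUDA context as noted above.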

Thanks.
