How can I infer sequentially using two independent TensorRT models?

Hi,
I want to build an application on a Jetson Nano. I have two TensorFlow models, but loading them and running prediction is not fast enough, so I converted them to TensorRT. However, once I load one model in TensorRT, I cannot load the second one at the same time because the first allocates the memory, so I end up loading the models over and over again. I need to keep both models loaded and run inference on them sequentially. What should I do?

Thanks

Hi,

Would you mind sharing more about why you cannot load the models together?

In general, you should first build an engine for each of the two models and keep both deserialized.
After you get the input, you can choose how to launch each inference job via the enqueue or execute API, as shown in the sketch below:

https://docs.nvidia.com/deeplearning/tensorrt/api/c_api/classnvinfer1_1_1_i_execution_context.html
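
Below is a minimal C++ sketch of this pattern, assuming a JetPack-era TensorRT (7.x/8.x) where both models have already been serialized to engine files (`model_a.engine` and `model_b.engine` are placeholder names) with static shapes and float32 bindings. Both engines are deserialized once at startup and their execution contexts are reused for every inference, so nothing needs to be reloaded between runs:

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iostream>
#include <vector>

using namespace nvinfer1;

// Minimal logger required by the TensorRT runtime.
class Logger : public ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cerr << msg << std::endl;
    }
};

// Read a serialized engine file into host memory.
std::vector<char> readFile(const char* path) {
    std::ifstream f(path, std::ios::binary | std::ios::ate);
    std::vector<char> buf(f.tellg());
    f.seekg(0);
    f.read(buf.data(), buf.size());
    return buf;
}

// Allocate one device buffer per binding, assuming static shapes and
// float32 tensors (adjust for your models' actual dtypes and shapes).
std::vector<void*> allocateBindings(ICudaEngine* engine) {
    std::vector<void*> bufs(engine->getNbBindings());
    for (int i = 0; i < engine->getNbBindings(); ++i) {
        Dims d = engine->getBindingDimensions(i);
        size_t count = 1;
        for (int j = 0; j < d.nbDims; ++j) count *= d.d[j];
        cudaMalloc(&bufs[i], count * sizeof(float));
    }
    return bufs;
}

int main() {
    Logger logger;
    IRuntime* runtime = createInferRuntime(logger);

    // Deserialize both engines once, at startup; each keeps its own weights
    // and workspace on the GPU, so the two stay resident together.
    auto blobA = readFile("model_a.engine");   // placeholder file names
    auto blobB = readFile("model_b.engine");
    ICudaEngine* engineA = runtime->deserializeCudaEngine(blobA.data(), blobA.size());
    ICudaEngine* engineB = runtime->deserializeCudaEngine(blobB.data(), blobB.size());

    // One execution context per engine, reused for every inference.
    IExecutionContext* ctxA = engineA->createExecutionContext();
    IExecutionContext* ctxB = engineB->createExecutionContext();

    std::vector<void*> bufsA = allocateBindings(engineA);
    std::vector<void*> bufsB = allocateBindings(engineB);

    // ... cudaMemcpy input data into the input bindings here ...

    // Run the two models back to back without reloading anything.
    ctxA->executeV2(bufsA.data());   // synchronous; use enqueueV2 with a
    ctxB->executeV2(bufsB.data());   // CUDA stream for asynchronous launches

    // ... cudaMemcpy results out of the output bindings here ...

    for (void* p : bufsA) cudaFree(p);
    for (void* p : bufsB) cudaFree(p);
    ctxA->destroy(); ctxB->destroy();
    engineA->destroy(); engineB->destroy();
    runtime->destroy();
    return 0;
}
```

The key point is that deserialization happens only once; per-frame work is just copying inputs and calling executeV2 (or enqueueV2) on each context in turn. If the two engines together exceed the Nano's memory, try reducing the workspace size when building each engine.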

Thanks.
