Multiple TensorRT models Inference on Jetson Orin

I have optimized TensorFlow/Keras models to TensorRT using TF-TRT and am getting correct outputs.
When I try to run multiple models at the same time, I run into GPU memory issues.
Is there any function in TensorRT similar to TensorFlow's memory-growth option, tf.config.experimental.set_memory_growth(gpus[0], True)?
TensorRT Version :
JetPack Version : 5.1.2-b104
TensorFlow Version : 2.12.0+nv23.6


You can configure the maximum memory with setMemoryPoolLimit at build time.
TensorRT will then pick tactics that can run within the given memory limit.
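For reference, a minimal sketch of what this looks like through the TensorRT Python bindings (the Python counterpart of the C++ setMemoryPoolLimit is IBuilderConfig.set_memory_pool_limit). This requires a device with TensorRT installed, and the 256 MiB limit is purely illustrative:

```python
import tensorrt as trt  # ships with JetPack's TensorRT Python bindings

# Sketch: build an engine with a capped workspace pool so several
# engines can coexist in the Orin's shared GPU memory.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
config = builder.create_builder_config()

# Limit the tactic workspace to 256 MiB (illustrative value);
# TensorRT will only consider tactics that fit in this budget.
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 256 << 20)

# ... populate `network` (e.g. via an ONNX parser), then:
# serialized_engine = builder.build_serialized_network(network, config)
```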

This is a pure TensorRT API; please check whether it is exposed through the TF-TRT interface.
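Since the question uses TF-TRT, a hedged sketch of the TF-TRT side: the TrtConversionParams used by TrtGraphConverterV2 has a max_workspace_size_bytes field that caps the same workspace pool at conversion time. The SavedModel paths below are hypothetical placeholders, and this only runs on a machine with a TensorRT-enabled TensorFlow build:

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Sketch: cap the TensorRT workspace via TF-TRT conversion parameters.
params = trt.TrtConversionParams(
    precision_mode='FP16',
    max_workspace_size_bytes=256 << 20,  # illustrative 256 MiB cap
)
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='saved_model_dir',  # hypothetical input path
    conversion_params=params,
)
converter.convert()
converter.save('trt_saved_model_dir')  # hypothetical output path
```

Note this bounds the per-engine workspace during conversion; it is not a growth-on-demand switch like set_memory_growth, so you may still need to budget memory across the models you load concurrently.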
