Building the TensorRT engine takes too long at each launch

Whenever we launch DeepStream with our own Yolo V3 model on the Jetson AGX Xavier, it takes DeepStream 4 to 5 minutes to build the TensorRT engine before the app can launch.
However, when launching the DeepStream sample models, it only takes a few seconds. Can anyone point us in the right direction as to why DeepStream always needs to build the TensorRT engine? If there is a shortcut, what do we have to do so we can launch our Yolo V3 models as fast as the DeepStream sample models?

Thanks a lot.

Hi,

You don’t need to build the TensorRT engine each time.
This should be a one-time job.

Please check whether the TRT engine path is set correctly in your config file.
Ex.

model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b4_int8.engine

DeepStream can then launch TensorRT by deserializing this file instead of recompiling the engine.
Thanks.