YOLOv3 and YOLOv3-tiny loading time in DeepStream


First, a little background: I generated the TensorRT engine (in FP16) for YOLOv3 and YOLOv3-tiny following the DeepStream example for the TX2 (I am simulating the TX2 4GB).

Now, I understand that engine generation takes a few minutes, but even once I have my serialized engine, running the example takes at least 10 seconds to get the first frame with YOLOv3 and 5 seconds with YOLOv3-tiny. Is there a way to reduce the loading time of the engine, or is it a fixed cost?


Have you maximized the device performance first?

sudo nvpmodel -m 0
sudo jetson_clocks

The start-up time includes initializing the pipeline (GStreamer), loading libraries (cuDNN, TensorRT, …), and deserializing the engine file.
You won’t be able to skip these steps.
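If you want to see which of these stages dominates on your board, you can time each one separately. Below is a minimal, stdlib-only sketch: the three stage functions are placeholders standing in for the real work (pipeline construction, library loading, and the engine load — in the TensorRT Python API the last one is `runtime.deserialize_cuda_engine`), so substitute your actual calls there.

```python
import time

def timed(label, fn):
    """Run fn(), print how long it took, and return its result."""
    t0 = time.perf_counter()
    result = fn()
    print(f"{label}: {time.perf_counter() - t0:.2f} s")
    return result

# Placeholder stage functions -- replace with your real start-up code,
# e.g. Gst.parse_launch(...), plugin loading, and
# trt_runtime.deserialize_cuda_engine(engine_bytes).
build_pipeline = lambda: "pipeline"
load_libraries = lambda: "libraries"
deserialize_engine = lambda: "engine"

pipeline = timed("GStreamer pipeline init", build_pipeline)
libs = timed("Library loading", load_libraries)
engine = timed("Engine deserialization", deserialize_engine)
```

On Jetson boards the library-loading stage (cuDNN/TensorRT shared objects) is often a large share of the total, which is why the delay does not disappear even with a pre-built serialized engine.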