Pipeline loading time is too long

Hi,

I am using deepstream_test1 to try some experiments. The problem is that every time I execute the script, it takes a long time to load the model (around 3 minutes). Is it possible to reduce this time?

This is probably because every time you run it, TensorRT has to rebuild the model (the pgie engine). What you can do to speed it up is point your config file at an already built engine file, just like the deepstream-app does.

i.e. in the pgie config file:

model-file=../../../../samples/models/Primary_Detector_Nano/resnet10.caffemodel
proto-file=../../../../samples/models/Primary_Detector_Nano/resnet10.prototxt
model-engine-file=../../../../samples/models/Primary_Detector_Nano/resnet10.caffemodel_b8_fp16.engine
labelfile-path=../../../../samples/models/Primary_Detector_Nano/labels.txt

The relative paths I've used above assume your custom app runs from the same folder as the NVIDIA-provided test apps. If the engine file doesn't exist yet, the first run will still build it (nvinfer serializes the engine to disk after building), so subsequent runs load it in seconds.
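
For reference, this is roughly where the test app wires in that config file. A minimal sketch, assuming a working DeepStream install; the "nvinfer" element and its "config-file-path" property are what the stock deepstream-test1 uses, and the model/engine paths above all live inside the file you point it at:

#include <gst/gst.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  /* "nvinfer" is the DeepStream inference element used as the pgie. */
  GstElement *pgie = gst_element_factory_make ("nvinfer", "primary-inference");
  if (!pgie) {
    g_printerr ("nvinfer not found; is DeepStream installed?\n");
    return -1;
  }

  /* deepstream-test1 points the pgie at its config file like this;
     the model-engine-file line shown above goes inside that file. */
  g_object_set (G_OBJECT (pgie), "config-file-path",
      "dstest1_pgie_config.txt", NULL);

  gst_object_unref (pgie);
  return 0;
}

You can compile a snippet like this with: gcc app.c $(pkg-config --cflags --libs gstreamer-1.0)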

Thanks man, it works perfectly.
