Jetson Nano DS start times

Hi,

I’m using a Jetson Nano, and when launching some of the sample apps I’m seeing pretty long load times (15 seconds for deepstream-test1). I made sure the models are not being recompiled on every run, but that’s the only idea I had.

I’m wondering, is this normal? How can we minimize this time, if at all? Our use case needs quicker launch times.

Thanks,

Here is what I see in the output when launching:

ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/sw-dssr/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:02.514354506 9869 0x55afaed430 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/sw-dssr/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:02.514434612 9869 0x55afaed430 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/sw-dssr/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:02.514467842 9869 0x55afaed430 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
0:00:50.527717094 9869 0x55afaed430 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1748> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:50.972411926 9869 0x55afaed430 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Running…

Hi,
The app checks at startup whether the file specified by model-engine-file exists. If it does not, the app generates the engine file, and that step takes some time. Please run with sudo the first time; afterwards you should see the model-engine-file in

/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-test1$ ll ../../../../samples/models/Primary_Detector/
-rw-r--r-- 1 root root 8202520 Jul 13 10:34 resnet10.caffemodel_b1_gpu0_fp16.engine

Then set that file path as model-engine-file in dstest1_pgie_config.txt. That eliminates the engine-generation time on subsequent launches.
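For reference, here is a minimal sketch of the relevant lines in dstest1_pgie_config.txt. The relative paths assume the default DeepStream 5.x sample layout (run from the deepstream-test1 directory); adjust them for your install.

```ini
[property]
# Pre-built engine: when this file exists, nvinfer deserializes it directly
# instead of rebuilding the engine from the caffemodel on every launch.
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine
# Original model files, used only if the engine file above is missing.
model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
# network-mode: 0=FP32, 1=INT8, 2=FP16. The Nano does not support INT8
# (see the warning in the log), so the engine was built in FP16 mode,
# hence the _fp16 suffix in the generated filename.
network-mode=2
```

Note that the engine filename encodes batch size, GPU id, and precision (b1_gpu0_fp16), so the file is only reused when those settings match the config.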


Worked!

thanks

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.