Error running the pretrained TrafficCamNet model with DeepStream

I'm using a Jetson Nano Developer Kit and trying to test TrafficCamNet.
I've installed the latest DeepStream 5.1 on JetPack 4.5.1.
I've followed the manual and downloaded the pretrained model.
When I run deepstream-app, I get the following error output:

Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.

Using winsys: x11
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models/../../models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine open error
0:00:05.986383843 15928 0x36664760 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models/../../models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine failed
0:00:05.986502490 15928 0x36664760 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models/../../models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine failed, try rebuild
0:00:05.986541241 15928 0x36664760 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
parseModel: Failed to open TLT encoded model file /opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models/../../models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:05.987127240 15928 0x36664760 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 1]: build engine file failed
Bus error

It seems the resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine file was never created.
I couldn't find that file anywhere, and I also failed to convert the model with tlt-converter.
Is there another way to create it?
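
For reference, this is roughly the tlt-converter invocation I tried, following the TLT quick-start docs. The model key, output node names, and input dims below are my assumptions for TrafficCamNet, not something I've confirmed:

# -k: assumed load key for the NGC pretrained models
# -o: DetectNet_v2 output nodes, -d: input dims as C,H,W
# -t: target precision, -m: max batch size, -e: engine output path
./tlt-converter resnet18_trafficcamnet_pruned.etlt \
  -k tlt_encode \
  -o output_cov/Sigmoid,output_bbox/BiasAdd \
  -d 3,544,960 \
  -t fp16 \
  -m 1 \
  -e resnet18_trafficcamnet_pruned.etlt_b1_gpu0_fp16.engine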

Hi,

The Jetson Nano doesn't support INT8 precision.
Only FP32 and FP16 are available.

Could you try it again in FP32 or FP16 mode?
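
For example, you can switch the precision with the network-mode key in the nvinfer config (0 = FP32, 1 = INT8, 2 = FP16). A minimal sketch of config_infer_primary_trafficcamnet.txt, assuming the default sample config layout (your paths may differ):

[property]
tlt-model-key=tlt_encode
tlt-encoded-model=../../models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
# name the engine to match the new precision; if the file is missing,
# nvinfer rebuilds it from the .etlt on the next run
model-engine-file=../../models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_fp16.engine

As your log already shows with "try rebuild", deepstream-app regenerates the engine from the .etlt automatically when the engine file can't be opened, so no manual tlt-converter step should be needed.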

Thanks.
