Problem - DeepStream 5.1 on Jetson Nano

Please provide complete information as applicable to your setup.

• Jetson Nano 4GB
• DeepStream 5.1
• JetPack 4.5.1
• TensorRT 7.1.3
• Bug & Question
• First of all, I installed DeepStream and the DeepStream Python Apps (link), then followed Run the Sample Applications (link), installing the prerequisites and running the sample applications.
• I ran an example as listed there, in the directory

/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-test2

and ran the following command (the camera is an RPi Camera Module v2):

python3 deepstream_test_2.py /dev/video0

Resulting output:
Creating Pipeline

Creating Source 

Creating H264Parser 

Creating Decoder 

Creating EGLSink 

Playing file /dev/video0 
Adding elements to Pipeline 

Linking elements in the Pipeline 

Starting pipeline 


Using winsys: x11 
Opening in BLOCKING MODE
Opening in BLOCKING MODE 
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-test2/../../../../samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine open error
0:00:04.741783866 18502      0x25e76d0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary3-nvinference-engine> NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 4]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-test2/../../../../samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine failed
0:00:04.741963245 18502      0x25e76d0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary3-nvinference-engine> NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 4]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-test2/../../../../samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine failed, try rebuild
0:00:04.741997308 18502      0x25e76d0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary3-nvinference-engine> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 4]: Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
INFO: [TRT]: Detected 1 inputs and 1 output network tensors.
ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_fp32.engine opened error
0:01:23.158087336 18502      0x25e76d0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary3-nvinference-engine> NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1744> [UID = 4]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x224x224       
1   OUTPUT kFLOAT predictions/Softmax 6x1x1           

0:01:23.414782028 18502      0x25e76d0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary3-nvinference-engine> [UID 4]: Load new model:dstest2_sgie3_config.txt sucessfully
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-test2/../../../../samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine open error
0:01:23.427962084 18502      0x25e76d0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 3]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-test2/../../../../samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine failed
0:01:23.428011565 18502      0x25e76d0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 3]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-test2/../../../../samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine failed, try rebuild
0:01:23.428045315 18502      0x25e76d0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 3]: Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
INFO: [TRT]: Detected 1 inputs and 1 output network tensors.
ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_fp32.engine opened error
0:01:46.015230173 18502      0x25e76d0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1744> [UID = 3]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x224x224       
1   OUTPUT kFLOAT predictions/Softmax 20x1x1          

0:01:46.284446100 18502      0x25e76d0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary2-nvinference-engine> [UID 3]: Load new model:dstest2_sgie2_config.txt sucessfully
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-test2/../../../../samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine open error
0:01:46.297188647 18502      0x25e76d0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 2]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-test2/../../../../samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine failed
0:01:46.297241669 18502      0x25e76d0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 2]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-test2/../../../../samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine failed, try rebuild
0:01:46.297276253 18502      0x25e76d0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 2]: Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
INFO: [TRT]: Detected 1 inputs and 1 output network tensors.
ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_fp32.engine opened error
0:02:08.242594979 18502      0x25e76d0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1744> [UID = 2]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x224x224       
1   OUTPUT kFLOAT predictions/Softmax 12x1x1          

0:02:08.612123903 18502      0x25e76d0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary1-nvinference-engine> [UID 2]: Load new model:dstest2_sgie1_config.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-test2/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:02:10.198968285 18502      0x25e76d0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-test2/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:02:10.199023911 18502      0x25e76d0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-test2/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:02:10.199057245 18502      0x25e76d0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine opened error
0:02:34.886605228 18502      0x25e76d0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1744> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:02:34.983893896 18502      0x25e76d0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest2_pgie_config.txt sucessfully

I also tried replacing:

/dev/video0

with a sample video:

python3 deepstream_test_2.py ../../../../samples/streams/sample_720p.h264

The resulting output is different and it works. It takes maybe 2 minutes before the first frame is read, and partway through it reports a warning:

Frame Number=1417 Number of Objects=4 Vehicle_count=2 Person_count=2
Frame Number=1418 Number of Objects=5 Vehicle_count=3 Person_count=2
Frame Number=1419 Number of Objects=7 Vehicle_count=5 Person_count=2
Frame Number=1420 Number of Objects=7 Vehicle_count=5 Person_count=2
Frame Number=1421 Number of Objects=6 Vehicle_count=4 Person_count=2
Frame Number=1422 Number of Objects=6 Vehicle_count=4 Person_count=2
Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:nvvideo-renderer:
There may be a timestamping problem, or this computer is too slow.
Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:nvvideo-renderer:
There may be a timestamping problem, or this computer is too slow.
Frame Number=1423 Number of Objects=6 Vehicle_count=4 Person_count=2
Frame Number=1424 Number of Objects=6 Vehicle_count=4 Person_count=2
Frame Number=1425 Number of Objects=8 Vehicle_count=6 Person_count=2
Frame Number=1426 Number of Objects=7 Vehicle_count=5 Person_count=2
Frame Number=1427 Number of Objects=7 Vehicle_count=5 Person_count=2

Request with this post:
I would like to know what is wrong or missing in order to run this example with a .mp4 file or with a live stream from a CSI camera.

• MY FUTURE USE CASE:
I would like to count people and vehicles that cross a street, using a CSI camera. I don't know whether these DeepStream examples are a good starting point to detect, classify, and count them using a line (stop-bar detection); a rough sketch of the counting logic I have in mind is shown below.
Let me know if there are better solutions for my use case, or improvements such as retraining the detector used in this example on my own dataset.
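
For reference, here is a hypothetical, plain-Python sketch of that line-crossing count, written on top of per-frame (track id, centroid) pairs like the ones the tracker in this sample provides; the class and all names are mine, not from the sample:

# Hypothetical line-crossing counter: a track is counted once, when its
# centroid moves from one side of a horizontal "stop bar" line to the other.
class LineCrossingCounter:
    def __init__(self, line_y):
        self.line_y = line_y     # y coordinate of the counting line, in pixels
        self.last_y = {}         # track id -> last seen centroid y
        self.counted = set()     # track ids already counted
        self.count = 0

    def update(self, track_id, centroid_y):
        prev = self.last_y.get(track_id)
        if (prev is not None and track_id not in self.counted
                and (prev - self.line_y) * (centroid_y - self.line_y) < 0):
            # The centroid was on one side of the line and is now on the other.
            self.counted.add(track_id)
            self.count += 1
        self.last_y[track_id] = centroid_y
        return self.count

# Usage with made-up detections: lists of (track id, centroid y) per frame.
counter = LineCrossingCounter(line_y=400)
for frame in [[(1, 380), (2, 500)], [(1, 410), (2, 480)], [(1, 430)]]:
    for track_id, centroid_y in frame:
        counter.update(track_id, centroid_y)
print(counter.count)  # -> 1: track 1 crossed y=400, track 2 did not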

Thanks :)

Hi,
By default the sample takes an H.264 elementary stream as the input source:

filesrc ! h264parse ! nvv4l2decoder ! nvstreammux ! ...

For a camera source, you need to customize the source element, for example:

nvarguscamerasrc bufapi-version=1 ! nvstreammux ! ...
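
A rough, untested sketch of that change in deepstream_test_2.py (the pipeline and streammux variables are from the sample, whose filesrc, h264parse and nvv4l2decoder elements would be removed; the 1280x720 at 30 fps caps are only an example, pick a mode your sensor supports):

# Replace the file source / parser / decoder chain with a CSI camera source.
source = Gst.ElementFactory.make("nvarguscamerasrc", "camera-source")
source.set_property("bufapi-version", True)   # so nvstreammux receives NvBufSurface buffers

caps_csi = Gst.ElementFactory.make("capsfilter", "camera-caps")
caps_csi.set_property("caps", Gst.Caps.from_string(
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1, format=NV12"))

pipeline.add(source)
pipeline.add(caps_csi)
source.link(caps_csi)

# Feed the camera caps directly into the stream muxer instead of the decoder.
streammux.set_property("live-source", 1)       # helps nvstreammux handle a live source
sinkpad = streammux.get_request_pad("sink_0")
srcpad = caps_csi.get_static_pad("src")
srcpad.link(sinkpad)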

I am a newbie here. Which examples and documentation should I read in order to apply this option and read a live stream from an RPi Camera v2?

Hi,
We have similar code in deepstream-app. Please refer to create_camera_source_bin() in

/opt/nvidia/deepstream/deepstream-5.1/sources/apps/apps-common/src/deepstream_source_bin.c
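
Before changing the Python app, you can also verify the CSI camera itself with a standalone pipeline, something like (adjust resolution and framerate to a mode your sensor supports):

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1, format=NV12' ! nvoverlaysink -e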

Right. Would it be better to reuse the example deepstream_test_1_usb.py, since it seems similar, in order to configure the CSI camera?

Hi,
deepstream-test2 demonstrates detection followed by recognition. If you only need detection, or have only a single model, it is better to refer to deepstream-test1.

I want to use deepstream-test2 to detect and classify objects from a CSI camera, not from an .h264 file.

Hi,
We would suggest checking the existing samples and doing the customization yourself. Other users may also have a similar implementation and be able to share a patch.