parallel_inference.txt (28.5 KB)
I’m trying to build a parallel inference pipeline with DeepStream on a Jetson. When I run the code, it gets stuck at the end of the file without ever displaying any video. I have tried several source types (.mp4, .h264, and RTSP) with the same result. Could anyone help me work through this problem? I have attached my Python code and the terminal output below.
The terminal output is:
Frames will be saved in /nvme0n1/deepstream_parallel_inference/output3.h264
Creating Pipeline
Creating streamux
Creating source_bin: 0 Creating H264Parser Creating Decoder
/nvme0n1/deepstream_parallel_inference/deepstream_imagedata-multistream.py:427: DeprecationWarning: Gst.Element.get_request_pad is deprecated
decoder.get_static_pad("src").link(streammux.get_request_pad(padname))
Creating source_bin: 1 Creating H264Parser Creating Decoder
Linked elements in pipeline
<gi.GstNvStreamPad object at 0xffff73a12840 (GstNvStreamPad at 0xaaaadc5d1460)>
Added bus message handler
Now playing…
0 : /nvme0n1/deepstream_parallel_inference/output3.h264
1 : /nvme0n1/deepstream_parallel_inference/output3.h264
Starting pipeline
Opening in BLOCKING MODE
Opening in BLOCKING MODE
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
0:00:05.076562348 2485774 0xaaaadc681cd0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/nvme0n1/deepstream_parallel_inference/resnet18_facedetectir_pruned.etlt_b2_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x544x960
1 OUTPUT kFLOAT output_bbox/BiasAdd 4x34x60
2 OUTPUT kFLOAT output_cov/Sigmoid 1x34x60
0:00:05.406817066 2485774 0xaaaadc681cd0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /nvme0n1/deepstream_parallel_inference/resnet18_facedetectir_pruned.etlt_b2_gpu0_fp32.engine
0:00:05.416663047 2485774 0xaaaadc681cd0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:config_infer_primary_peoplenet_qat.txt sucessfully
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
Frame Number=0 Number of Objects=0 Face_count=0
Frame Number=0 Number of Objects=0 Face_count=0
0:00:09.680420337 2485774 0xaaaadc681cd0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/nvme0n1/deepstream_parallel_inference/resnet18_facedetectir_pruned.etlt_b2_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x544x960
1 OUTPUT kFLOAT output_bbox/BiasAdd 4x34x60
2 OUTPUT kFLOAT output_cov/Sigmoid 1x34x60
0:00:10.034208942 2485774 0xaaaadc681cd0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /nvme0n1/deepstream_parallel_inference/resnet18_facedetectir_pruned.etlt_b2_gpu0_int8.engine
0:00:10.040155846 2485774 0xaaaadc681cd0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:config_infer_primary_peoplenet_qat1.txt sucessfully
^C[NvMultiObjectTracker] De-initialized
[NvMultiObjectTracker] De-initialized
Saving pipeline graph in folder frames/
sh: 1: dot: not found