parallel_inference.txt (19.3 KB)
jetson@ubuntu:/nvme0n1/deepstream_parallel_inference$ python3 deepstream_imagedata-multistream.py
Frames will be saved in /nvme0n1/deepstream_parallel_inference/output3.h264
Creating Pipeline
Creating streamux
Creating source_bin: 0
Creating H264Parser
Creating Decoder
/nvme0n1/deepstream_parallel_inference/deepstream_imagedata-multistream.py:210: DeprecationWarning: Gst.Element.get_request_pad is deprecated
decoder.get_static_pad("src").link(streammux.get_request_pad(padname))
Creating source_bin: 1
Creating H264Parser
Creating Decoder
(python3:3032647): GStreamer-WARNING **: 18:42:11.213: Trying to link elements nvtee-que2 and streamdemux-2 that don't share a common ancestor: nvtee-que2 hasn't been added to a bin or pipeline, but streamdemux-2 is in pipeline0
(python3:3032647): GStreamer-WARNING **: 18:42:11.213: Trying to link elements nvtee-que2 and streamdemux-2 that don't share a common ancestor: nvtee-que2 hasn't been added to a bin or pipeline, but streamdemux-2 is in pipeline0
Linked elements in pipeline
<gi.GstNvStreamPad object at 0xffff68d07e80 (GstNvStreamPad at 0xaaaaeb53d460)>
Added bus message handler
Now playing…
0 : /nvme0n1/deepstream_parallel_inference/output3.h264
1 : /nvme0n1/deepstream_parallel_inference/output3.h264
Starting pipeline
Opening in BLOCKING MODE
Opening in BLOCKING MODE
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
0:00:05.109113677 3032647 0xaaaaeb5ed760 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/nvme0n1/deepstream_parallel_inference/resnet18_facedetectir_pruned.etlt_b2_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x544x960
1 OUTPUT kFLOAT output_bbox/BiasAdd 4x34x60
2 OUTPUT kFLOAT output_cov/Sigmoid 1x34x60
0:00:05.447388504 3032647 0xaaaaeb5ed760 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /nvme0n1/deepstream_parallel_inference/resnet18_facedetectir_pruned.etlt_b2_gpu0_fp32.engine
0:00:05.456709924 3032647 0xaaaaeb5ed760 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:config_infer_primary_peoplenet_qat.txt sucessfully
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
0:00:09.601986211 3032647 0xaaaaeb5ed760 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/nvme0n1/deepstream_parallel_inference/resnet18_facedetectir_pruned.etlt_b2_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x544x960
1 OUTPUT kFLOAT output_bbox/BiasAdd 4x34x60
2 OUTPUT kFLOAT output_cov/Sigmoid 1x34x60
0:00:09.953739400 3032647 0xaaaaeb5ed760 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /nvme0n1/deepstream_parallel_inference/resnet18_facedetectir_pruned.etlt_b2_gpu0_int8.engine
0:00:09.960647096 3032647 0xaaaaeb5ed760 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:config_infer_primary_peoplenet_qat1.txt sucessfully
I have uploaded my code along with my terminal output. The output gets stuck at this point and nothing runs after it. What should I change to get the best results?
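
From the GStreamer warning above, it looks like nvtee-que2 is linked to streamdemux-2 before it is ever added to the pipeline, so that branch never carries data, which would explain the stall. Is something like the sketch below the right direction? (The element names are taken from the warning; everything else is assumed, since my actual code is in the attachment.)

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)
    pipeline = Gst.Pipeline.new("pipeline0")

    # Assumed element names, copied from the warning message.
    queue2 = Gst.ElementFactory.make("queue", "nvtee-que2")
    streamdemux = Gst.ElementFactory.make("nvstreamdemux", "streamdemux-2")

    # Every element must be added to the pipeline (or to the same bin)
    # BEFORE it is linked; otherwise GStreamer prints the
    # "don't share a common ancestor" warning and the link silently fails.
    pipeline.add(queue2)
    pipeline.add(streamdemux)

    if not queue2.link(streamdemux):
        raise RuntimeError("failed to link nvtee-que2 -> streamdemux-2")
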
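There is also the DeprecationWarning on get_request_pad. I assume that on GStreamer 1.20 and newer the non-deprecated call is request_pad_simple with the same pad name, roughly like this (a sketch with assumed names, not my exact code), and that this warning by itself is harmless:

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)

    streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
    assert streammux is not None, "nvstreammux plugin not found"

    padname = "sink_0"  # assumed pad name, as in the DeepStream sample apps

    # Deprecated:  sinkpad = streammux.get_request_pad(padname)
    # Replacement on GStreamer 1.20+:
    sinkpad = streammux.request_pad_simple(padname)

Is the hang caused by the missing pipeline.add() on the queue, or is there something else in the config I should change?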