Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson Xavier AGX
• DeepStream Version: 6.0.1
• JetPack Version (valid for Jetson only): 4.6
• TensorRT Version: 8.2.1
I can run deepstream-app and a Python script correctly when I provide just one video source, but the pipeline fails to start when two or more streams are provided. It seems there is something I have misunderstood in the config file. Note that when I replace the current config file (for yolov5) with another one (for traficnet, for instance), the pipeline is created fine, even with two video streams. This is the error I get (a sketch of the script's batch-size handling follows the log):
ubuntu@ubuntu:~/edge$ python3 test4_yolov5.py -i rtsp://admin:hbyt12345@10.21.45.19:554 rtsp://admin:hbyt12345@10.21.45.19:554
(gst-plugin-scanner:31261): GStreamer-WARNING **: 10:42:13.426: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libcustom2d_preprocess.so': /usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libcustom2d_preprocess.so: undefined symbol: NvBufSurfTransformAsync
(gst-plugin-scanner:31261): GStreamer-WARNING **: 10:42:13.505: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory
(gst-plugin-scanner:31261): GStreamer-WARNING **: 10:42:13.515: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_preprocess.so': /usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_preprocess.so: undefined symbol: NvBufSurfTransformAsync
(gst-plugin-scanner:31261): GStreamer-WARNING **: 10:42:13.540: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
Creating Pipeline …
Creating stream-muxer
Creating stream-demuxer
Creating source bin source-bin-00 ( rtsp://admin:hbyt12345@10.21.45.19:554 )
Creating uri-decode-bin
Creating source bin source-bin-01 ( rtsp://admin:hbyt12345@10.21.45.19:554 )
Creating uri-decode-bin
Creating primary-gpu-inference-engine
PGIE batch size : 1
WARNING: Overriding infer-config batch-size 1 with number of sources 2
Creating nvtracker
Creating nvtee
Creating nvtiler
Creating convertor 0
Creating convertor 1
Creating convertor tile
Creating onscreendisplay 0
Creating onscreendisplay 1
Creating onscreendisplay tile
Creating convertor_postosd 0
Creating convertor_postosd 1
Creating convertor_postosd tile
Creating capsfilter 0
Creating capsfilter 1
Creating capsfilter tile
Creating h264-encoder 0
Creating h264-encoder 1
Creating h264-encoder tile
Creating rtp-h264-payload 0
Creating rtp-h264-payload 1
Creating rtp-h264-payload tile
Creating udp-sink 0
Creating udp-sink 1
Creating udp-sink tile
Adding elements to Pipeline
Linking elements in the Pipeline
demux source 0
demux source 1
*** Launched RTSP Streaming at rtsp://localhost:8554/stream0 ***
*** Launched RTSP Streaming at rtsp://localhost:8554/stream1 ***
*** Launched RTSP Streaming at rtsp://localhost:8554/tiled ***
Starting pipeline
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
0:00:01.964180124 31259 0x1bc2da30 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
WARNING: [TRT]: Output type must be INT32 for shape outputs
WARNING: [TRT]: Output type must be INT32 for shape outputs
WARNING: [TRT]: Output type must be INT32 for shape outputs
WARNING: [TRT]: Output type must be INT32 for shape outputs
WARNING: [TRT]: Output type must be INT32 for shape outputs
WARNING: [TRT]: Output type must be INT32 for shape outputs
Building the TensorRT Engine
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
Building complete
0:01:59.968817713 31259 0x1bc2da30 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 1]: serialize cuda engine to file: /home/ubuntu/edge/model_b2_gpu0_fp32.engine successfully
INFO: [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x640x640
1 OUTPUT kFLOAT boxes 25200x4
2 OUTPUT kFLOAT scores 25200x1
3 OUTPUT kFLOAT classes 25200x1
0:01:59.997884196 31259 0x1bc2da30 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1833> [UID = 1]: Backend has maxBatchSize 1 whereas 2 has been requested
0:01:59.997928294 31259 0x1bc2da30 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1966> [UID = 1]: deserialized backend context :/home/ubuntu/edge/model_b2_gpu0_fp32.engine failed to match config params
0:02:00.032278914 31259 0x1bc2da30 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:02:00.032343685 31259 0x1bc2da30 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:02:00.032389767 31259 0x1bc2da30 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:02:00.032411369 31259 0x1bc2da30 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Config file path: /home/ubuntu/edge/config/dstest4_pgie_nvinfer_yolov5_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
[NvMultiObjectTracker] De-initialized
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-gpu-inference-engine:
Config file path: /home/ubuntu/edge/config/dstest4_pgie_nvinfer_yolov5_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app
--- 0.011171579360961914 seconds ---
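For context on where the requested batch size of 2 comes from: the script follows the multi-source pattern of the deepstream_python_apps samples. The sketch below is a paraphrase, not the exact code from test4_yolov5.py; the element names, config path, and warning text match the log above, while the argument parsing and resolution values are assumptions.

```python
import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# URIs passed after the -i flag (argument parsing in the real script may differ)
uris = sys.argv[2:]
number_sources = len(uris)

streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
pgie = Gst.ElementFactory.make("nvinfer", "primary-gpu-inference-engine")

# nvstreammux batches one frame per source, so its batch-size tracks the source count
streammux.set_property("width", 1920)            # illustrative
streammux.set_property("height", 1080)           # illustrative
streammux.set_property("batched-push-timeout", 4000000)
streammux.set_property("batch-size", number_sources)

pgie.set_property("config-file-path",
                  "/home/ubuntu/edge/config/dstest4_pgie_nvinfer_yolov5_config.txt")

# This is what prints "Overriding infer-config batch-size 1 with number of
# sources 2" in the log: the config file sets batch-size to 1, the script
# raises it to the number of sources, and nvinfer then looks for (or builds)
# an engine that supports that batch size.
pgie_batch_size = pgie.get_property("batch-size")
if pgie_batch_size != number_sources:
    print("WARNING: Overriding infer-config batch-size", pgie_batch_size,
          "with number of sources", number_sources)
    pgie.set_property("batch-size", number_sources)
```

With the two RTSP URIs from the command line, number_sources is 2, which matches the b2 in the model_b2_gpu0_fp32.engine name that the log shows being serialized.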
Attached are the script, test4_yolov5.py.txt (24.5 KB), and the config file, dstest4_pgie_nvinfer_yolov5_config.txt (756 Bytes).