Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs): bug
• How to reproduce the issue?
I can run deepstream-app with multiple RTSP sources, but the network is sometimes unstable. When that happens, I get errors like:
```
WARNING from src_elem15: Could not read from resource.
Debug info: gstrtspsrc.c(5293): gst_rtspsrc_loop_udp (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin15/GstRTSPSrc:src_elem15:
Unhandled return value -7.
ERROR from src_elem15: Could not read from resource.
Debug info: gstrtspsrc.c(5361): gst_rtspsrc_loop_udp (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin15/GstRTSPSrc:src_elem15:
Could not receive message. (System error)
ERROR from src_elem15: Internal data stream error.
Debug info: gstrtspsrc.c(5653): gst_rtspsrc_loop (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin15/GstRTSPSrc:src_elem15:
streaming stopped, reason error (-5)
ERROR: [TRT]: Reshape_172: reshaping failed for tensor: 337
ERROR: [TRT]: shapeMachine.cpp (160) - Shape Error in executeReshape: reshape would change volume
ERROR: [TRT]: Instruction: RESHAPE_ZERO_IS_PLACEHOLDER{4 18 24 40} {16 3 6 24 40}
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
```
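For reference, each RTSP input is a `[sourceN]` group in my deepstream-app config, roughly like the sketch below (the URI is a placeholder for my camera address, and the reconnect option is one I am experimenting with; I am not sure it is the right fix):

```ini
# One group per RTSP stream; type=4 selects the RTSP source in deepstream-app.
[source15]
enable=1
type=4
# Placeholder URI; the real streams are IP cameras on an unstable network.
uri=rtsp://192.168.1.15:554/stream
num-sources=1
gpu-id=0
# Ask deepstream-app to retry the connection when the stream drops
# (source-group option in recent DeepStream releases; unsure if it
# fully handles the failure above).
rtsp-reconnect-interval-sec=10
```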
How can I fix it?
Thank you!