I created a custom bounding-box parser for my custom ONNX model, which I'm using with the nvinfer plugin.
Everything works fine when I use it as the primary detector, but when I set the same model as a secondary detector, the pipeline just stops after emitting some warnings.
• Hardware Platform: JETSON NANO
• DeepStream Version : 5.0
• JetPack Version : 4.4DP
• TensorRT Version : 7.0
Pipeline is PREROLLING ...
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
0:02:35.326956106 8953 0x55632acde0 WARN nvinfer gstnvinfer.cpp:1240:convert_batch_and_push_to_input_thread:<nvinfer1> error: NvBufSurfTransform failed with error -2 while converting buffer
ERROR: from element /GstPipeline:pipeline0/GstNvInfer:nvinfer1: NvBufSurfTransform failed with error -2 while converting buffer
Additional debug info:
/dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1240): convert_batch_and_push_to_input_thread (): /GstPipeline:pipeline0/GstNvInfer:nvinfer1
Execution ended after 0:00:04.095490590
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
0:02:35.329451363 8953 0x55632acde0 WARN nvinfer gstnvinfer.cpp:1946:gst_nvinfer_output_loop:<nvinfer0> error: Internal data stream error.
0:02:35.330817900 8953 0x55632acde0 WARN nvinfer gstnvinfer.cpp:1946:gst_nvinfer_output_loop:<nvinfer0> error: streaming stopped, reason error (-5)
Setting pipeline to NULL ...
Freeing pipeline ...
Hi,
It looks like gst_nvinfer_output_loop is returning GST_FLOW_ERROR, which seems to happen when "pushing buffer to downstream element". There is probably something wrong with the buffer. I would debug the issue further by inspecting the buffer in nvinfer (sources/gst-plugins/gst-nvinfer/gstnvinfer.cpp).
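To see what nvinfer is doing with the buffer, raising the GStreamer debug level for the nvinfer category is a common first step. A minimal sketch (the pipeline below is a placeholder; substitute your actual gst-launch command and file paths):

```shell
# Raise log verbosity for the nvinfer element only (level 5 = DEBUG)
# and capture the log output to a file for inspection.
GST_DEBUG=nvinfer:5 gst-launch-1.0 uridecodebin uri=file:///path/to/sample.mp4 ! \
    m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
    nvinfer config-file-path=pgie.txt ! fakesink 2> nvinfer_debug.log
```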
bcao
June 15, 2020, 8:07am
Could you share your nvinfer config file for further check?
BTW, what's the resolution of your input, and did you resize in streammux?
Resolution in streammux is 1280x720.
Config files for pgie and sgie are attached:
pgie.txt (4.0 KB)
sgie.txt (3.5 KB)
Pipeline description:
gst-launch-1.0 uridecodebin uri=file:///home/Username/pipline/sample_day.mp4 ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvvideoconvert ! nvinfer config-file-path=/home/Username/pipline/redaction_with_deepstream/configs/pgie_config_fd_lpd.txt ! nvvideoconvert ! nvinfer config-file-path=/home/Username/Downloads/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/testocr/dstest1_pgie_config_cpy.txt ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! fpsdisplaysink name=fpssink text-overlay=true video-sink=xvimagesink sync=0
Also, I couldn’t reproduce this error on any other model.
bcao
June 15, 2020, 10:12am
What are the input dims of your pgie/sgie models? I cannot find them in your config files.
For pgie I'm using the sample model from the NVIDIA redaction example: redaction_with_deepstream/fd_lpd_model at master · NVIDIA-AI-IOT/redaction_with_deepstream · GitHub
For sgie, the input dims are 3x200x200.
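For reference, in DeepStream 5.0 the input dimensions of a custom model can be stated explicitly in the nvinfer config via infer-dims (CHW, semicolon-separated). The keys below are real nvinfer properties, but the fragment is only a minimal sketch of how a secondary classifier with 3x200x200 input might be declared, not the poster's actual sgie.txt:

```ini
[property]
# CHW input dimensions of the secondary model (3 channels, 200x200)
infer-dims=3;200;200
# Run as a secondary GIE, operating on objects from the primary detector
process-mode=2
gie-unique-id=2
operate-on-gie-id=1
```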
bcao
June 23, 2020, 6:11am
Could you run the engine using trtexec?
Yes, it ran normally with trtexec.
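For context, a standalone engine check with trtexec typically looks like the following (the engine path is a placeholder; on JetPack, trtexec usually ships under /usr/src/tensorrt/bin):

```shell
# Load the serialized TensorRT engine and run inference benchmarks on it
/usr/src/tensorrt/bin/trtexec --loadEngine=/path/to/model.engine
```

If this succeeds, the engine itself is valid and the problem lies in how DeepStream feeds or transforms buffers, as in this thread.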
bcao
June 23, 2020, 7:04am
OK, we have a known issue where the scaling factor cannot be larger than 16, and we will fix it in the next release.
For now, can you set both detected-min-h and detected-min-w of the pgie to larger than 15, since your network resolution is 3 * 200 * 200?
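The suggested workaround would go in the pgie config file. A minimal sketch, assuming the standard nvinfer [class-attrs-all] group (values chosen per the suggestion above; all other keys omitted):

```ini
[class-attrs-all]
# Drop detections smaller than 16x16 px so the sgie scaling factor
# (200 / object side) stays at or below 16, avoiding the
# NvBufSurfTransform error -2 during buffer conversion
detected-min-w=16
detected-min-h=16
```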
That explains it. Thank you.