YOLOv4 using nvtracker

Hi all.

I have run into a problem. Running inference on videos with the same model works fine when I use deepstream-app.

When I use the code below to run YOLOv4 on camera sources, the program runs normally and detects objects if I leave out nvtracker, or if I arrange the pipeline as nvinfer → tiler → tracker.

But when I add the tracker after nvinfer and before the tiler, the program fails with this error:

gstnvtracker: NvBufSurfTransform failed with error -3 while converting buffer
gstnvtracker: Failed to convert input batch.
0:00:09.743364326 16216   0x55aee9d280 WARN                 nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop:<nvinfer0> error: Internal data stream error.
0:00:09.743410920 16216   0x55aee9d280 WARN                 nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop:<nvinfer0> error: streaming stopped, reason error (-5)
gstnvtracker: NvBufSurfTransform failed with error -3 while converting buffer
gstnvtracker: Failed to convert input batch.
gstnvtracker: NvBufSurfTransform failed with error -3 while converting buffer
gstnvtracker: Failed to convert input batch.
gstnvtracker: NvBufSurfTransform failed with error -3 while converting buffer
gstnvtracker: Failed to convert input batch.
Segmentation fault (core dumped)

This is the code I am using.

camera0_pipe = gst_parse_launch("nvarguscamerasrc sensor-id=0 bufapi-version=true ! tee name=c0 \
                                nvarguscamerasrc sensor-id=1 bufapi-version=true ! tee name=c1 \
                                nvarguscamerasrc sensor-id=2 bufapi-version=true ! tee name=c2 \
                                nvarguscamerasrc sensor-id=3 bufapi-version=true ! tee name=c3 \
                                c0. ! video/x-raw(memory:NVMM), framerate=30/1, format=(string)NV12 ! nvvidconv ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=30/1, format=(string)NV12 ! queue ! m.sink_0 \
                                c1. ! video/x-raw(memory:NVMM), framerate=30/1, format=(string)NV12 ! nvvidconv ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=30/1, format=(string)NV12 ! queue ! m.sink_1 \
                                c2. ! video/x-raw(memory:NVMM), framerate=30/1, format=(string)NV12 ! nvvidconv ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=30/1, format=(string)NV12 ! queue ! m.sink_2 \
                                c3. ! video/x-raw(memory:NVMM), framerate=30/1, format=(string)NV12 ! nvvidconv ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=30/1, format=(string)NV12 ! queue ! m.sink_3 \
                                nvstreammux name=m batch-size=4 width=1920 height=1080 batched-push-timeout=4000000 \
                                ! nvinfer config-file-path=./deepstreamrelated-master/Pytorch_Yolo_V4/config_infer_primary_yoloV4_b8_int8.txt interval=12 \
                                ! nvmultistreamtiler width=1920 height=1080 rows=2 columns=2 \
                                ! nvtracker tracker-width=608 tracker-height=608 ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so \
                                ! nvvideoconvert ! nvdsosd ! queue ! nvegltransform ! nveglglessink sync=false async=false ", NULL);

gst_element_set_state(camera0_pipe, GST_STATE_PLAYING);

So can nvtracker only be placed after the tiler? I want to use a broker to send out data that includes the sensor ID, but after the tiler the sensor ID is always fixed to 0 (see the probe sketch below for how I intend to read it).
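For reference, this is roughly how I plan to read the sensor ID from a pad probe attached before the tiler. It is only a sketch based on the DeepStream metadata API (gst_buffer_get_nvds_batch_meta / NvDsFrameMeta); the callback name and the assumption that the tiler is given name=tiler in the pipeline string are mine:

#include <gst/gst.h>
#include "gstnvdsmeta.h"

/* Probe on the tiler sink pad (i.e. after nvtracker, before tiling),
 * while frame_meta->source_id still reflects the original camera index. */
static GstPadProbeReturn
sensor_id_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
    if (!batch_meta)
        return GST_PAD_PROBE_OK;

    for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
        /* source_id is the stream index assigned by nvstreammux; this is
         * the value I would attach to the broker payload. */
        g_print ("frame %d from sensor %u\n", frame_meta->frame_num, frame_meta->source_id);
    }
    return GST_PAD_PROBE_OK;
}

/* Attachment, assuming the tiler is created with name=tiler in the pipeline:
 *
 *   GstElement *tiler = gst_bin_get_by_name (GST_BIN (camera0_pipe), "tiler");
 *   GstPad *sinkpad = gst_element_get_static_pad (tiler, "sink");
 *   gst_pad_add_probe (sinkpad, GST_PAD_PROBE_TYPE_BUFFER, sensor_id_probe, NULL, NULL);
 */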

• Hardware Platform (Jetson / GPU): Jetson Xavier
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.1.3
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs): questions

Thanks in advance

Hi @346842280,
Could you refer to GitHub - NVIDIA-AI-IOT/yolov4_deepstream?

Thanks!

Hi mchi. Thank you for your reply.

There is one more question: when using deepstream-app, how does it arrange the pipeline, i.e. where does it put nvinfer, the tracker, and the tiler? In my case, the pipeline nvinfer → nvtracker → tiler crashes, but nvinfer → tiler → nvtracker works.

Hi @346842280,
nvvidconv is not usable in DeepStream, since DeepStream plugins use NvBufSurface buffers, which nvvidconv cannot accept. Please use "nvvideoconvert" instead of "nvvidconv" in DeepStream.

The deepstream-app pipeline is essentially fixed; it is always nvinfer → nvtracker → tiler.
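For example, here is an untested sketch of your pipeline with both changes applied: nvvideoconvert instead of nvvidconv in each camera branch, and nvinfer → nvtracker → tiler after the mux. Only one camera branch is shown and the tees are omitted for brevity; sink_1 to sink_3 would follow the same pattern, and your config path and element properties stay as they were:

camera0_pipe = gst_parse_launch("nvarguscamerasrc sensor-id=0 bufapi-version=true ! video/x-raw(memory:NVMM), framerate=30/1, format=(string)NV12 \
                                ! nvvideoconvert ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=30/1, format=(string)NV12 ! queue ! m.sink_0 \
                                nvstreammux name=m batch-size=4 width=1920 height=1080 batched-push-timeout=4000000 \
                                ! nvinfer config-file-path=./deepstreamrelated-master/Pytorch_Yolo_V4/config_infer_primary_yoloV4_b8_int8.txt interval=12 \
                                ! nvtracker tracker-width=608 tracker-height=608 ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so \
                                ! nvmultistreamtiler width=1920 height=1080 rows=2 columns=2 \
                                ! nvvideoconvert ! nvdsosd ! queue ! nvegltransform ! nveglglessink sync=false async=false", NULL);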

Please refer to Frequently Asked Questions — DeepStream 6.1.1 Release documentation

Thanks, these are very helpful.

Hi mchi, I have run into another problem.

Switching from nvvidconv to nvvideoconvert solved some problems, but since I am using interpipe from RidgeRun, I suspect that opening the pipeline with gst-launch-1.0 through interpipe affects the NvBufSurface usage.

I just want to confirm: for now I can only arrange the pipeline as interpipesrc → nvinfer → nvtracker → tiler → osd → …; if it is interpipesrc → nvinfer → tiler → nvtracker → osd → …, I get

gstnvtracker: NvBufSurfTransform failed with error -3 while converting buffer
gstnvtracker: Failed to convert input batch.
0:00:10.083468927 17913   0x55a9c3e850 WARN               nvtracker gstnvtracker.cpp:565:gst_nv_tracker_submit_input_buffer:<nvtracker0> error: Failed to submit input to tracker
0:00:10.083771503 17913   0x55a9c3e850 WARN                 nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop:<nvinfer0> error: Internal data stream error.
0:00:10.083815313 17913   0x55a9c3e850 WARN                 nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop:<nvinfer0> error: streaming stopped, reason error (-5)
ERROR: from element /GstPipeline:pipeline0/GstNvTracker:nvtracker0: Failed to submit input to tracker

So is this related to how NvBufSurface is used by nvinfer and nvtracker? If so, is there a way to restore the NvBufSurface and pass it to nvtracker?
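To be concrete, the working arrangement looks roughly like this (only a sketch, trimmed to one stream; the interpipesrc properties I show and the name cam0 are placeholders for my actual settings, and cam0 would have to match the interpipesink name in my capture pipeline):

/* Working arrangement: nvtracker before the tiler, nvvideoconvert only.
 * batch-size is 1 because only one interpipe stream is shown here. */
infer_pipe = gst_parse_launch("interpipesrc listen-to=cam0 is-live=true format=time \
                                ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12 ! queue ! m.sink_0 \
                                nvstreammux name=m batch-size=1 width=1920 height=1080 batched-push-timeout=4000000 \
                                ! nvinfer config-file-path=./deepstreamrelated-master/Pytorch_Yolo_V4/config_infer_primary_yoloV4_b8_int8.txt interval=12 \
                                ! nvtracker tracker-width=608 tracker-height=608 ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so \
                                ! nvmultistreamtiler width=1920 height=1080 rows=1 columns=1 \
                                ! nvvideoconvert ! nvdsosd ! queue ! nvegltransform ! nveglglessink sync=false async=false", NULL);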

I am not clear about the internal data structure manipulation, so I may be mistaken about this.

Best regards

Hi 346842280,

Please open a new topic for this issue. Thanks!