YOLOv4 using nvtracker

Hi all.

I have run into a problem. Inference works fine when I use deepstream-app on video files with the same model.

When I use the code below to run YOLOv4 on camera sources, the program runs normally and detects objects as long as I either leave out nvtracker or arrange the pipeline as nvinfer -> tiler -> tracker.

But when I add the tracker after nvinfer and before the tiler, the program fails with this error:

gstnvtracker: NvBufSurfTransform failed with error -3 while converting buffer
gstnvtracker: Failed to convert input batch.
0:00:09.743364326 16216   0x55aee9d280 WARN                 nvinfer 
gstnvinfer.cpp:1975:gst_nvinfer_output_loop:<nvinfer0> error: Internal data stream error.
0:00:09.743410920 16216   0x55aee9d280 WARN                 nvinfer 
gstnvinfer.cpp:1975:gst_nvinfer_output_loop:<nvinfer0> error: streaming stopped, reason error (-5)
gstnvtracker: NvBufSurfTransform failed with error -3 while converting buffer
gstnvtracker: Failed to convert input batch.
gstnvtracker: NvBufSurfTransform failed with error -3 while converting buffer
gstnvtracker: Failed to convert input batch.
gstnvtracker: NvBufSurfTransform failed with error -3 while converting buffer
gstnvtracker: Failed to convert input batch.
Segmentation fault (core dumped)

This is the code I am using:

camera0_pipe = gst_parse_launch("nvarguscamerasrc sensor-id=0 bufapi-version=true ! tee name=c0 \
                                nvarguscamerasrc sensor-id=1 bufapi-version=true ! tee name=c1 \
                                nvarguscamerasrc sensor-id=2 bufapi-version=true ! tee name=c2 \
                                nvarguscamerasrc sensor-id=3 bufapi-version=true ! tee name=c3 \
                                c0. ! video/x-raw(memory:NVMM), framerate=30/1, format=(string)NV12 ! nvvidconv ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=30/1, format=(string)NV12 ! queue ! m.sink_0 \
                                c1. ! video/x-raw(memory:NVMM), framerate=30/1, format=(string)NV12 ! nvvidconv ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=30/1, format=(string)NV12 ! queue ! m.sink_1 \
                                c2. ! video/x-raw(memory:NVMM), framerate=30/1, format=(string)NV12 ! nvvidconv ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=30/1, format=(string)NV12 ! queue ! m.sink_2 \
                                c3. ! video/x-raw(memory:NVMM), framerate=30/1, format=(string)NV12 ! nvvidconv ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=30/1, format=(string)NV12 ! queue ! m.sink_3 \
                                nvstreammux name=m batch-size=4 width=1920 height=1080 batched-push-timeout=4000000 \
                                ! nvinfer config-file-path=./deepstreamrelated-master/Pytorch_Yolo_V4/config_infer_primary_yoloV4_b8_int8.txt interval=12 ! nvmultistreamtiler width=1920 height=1080 rows=2 columns=2 ! nvtracker tracker-width=608 tracker-height=608 ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so ! nvvideoconvert ! nvdsosd ! queue ! nvegltransform ! nveglglessink sync=false async=false ", NULL);

gst_element_set_state(camera0_pipe, GST_STATE_PLAYING);

So can nvtracker only be placed after the tiler? I want to use the message broker to send out data, and I need the sensor ID, but after the tiler the sensor-id is fixed to 0.

• Hardware Platform (Jetson / GPU): Jetson Xavier
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.1.3
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs): questions

Thanks in advance

Hi @346842280,
Could you refer to https://github.com/NVIDIA-AI-IOT/yolov4_deepstream ?

Thanks!

Hi mchi. Thank you for your reply.

There is one more question. When using deepstream-app, how does it arrange the pipeline, i.e. where does it place nvinfer, the tracker, and the tiler? In my case, the pipeline crashes as nvinfer -> nvtracker -> tiler, but nvinfer -> tiler -> nvtracker works.

Hi @346842280,
nvvidconv is not usable in DeepStream: the DeepStream plugins use NvBufSurface buffers, which nvvidconv cannot accept. Please use “nvvideoconvert” instead of “nvvidconv” in DeepStream pipelines.
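For reference, here is one camera branch from the pipeline above with nvvidconv swapped for nvvideoconvert. This is only a sketch under the assumptions of the original post (DeepStream 5.0 on Jetson, Argus camera with bufapi-version=true); it is not verified on hardware, and the sink is simplified to fakesink:

```shell
# One camera branch rewritten to use nvvideoconvert, which accepts the
# NvBufSurface buffers that DeepStream elements produce and consume.
gst-launch-1.0 \
  nvarguscamerasrc sensor-id=0 bufapi-version=true \
  ! 'video/x-raw(memory:NVMM), framerate=30/1, format=(string)NV12' \
  ! nvvideoconvert \
  ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12' \
  ! queue ! fakesink
```

The same substitution applies to each of the four camera branches feeding nvstreammux.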

The deepstream-app pipeline order is essentially fixed; it is always nvinfer -> nvtracker -> tiler.
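Following that fixed order, the back half of the failing launch string can be rearranged so the tracker sits between nvinfer and the tiler. This is a hedged sketch only: the config-file path and tracker library path are copied from the original post and not verified here, and the `...` stands for the four camera branches feeding the mux:

```shell
# Back half of the pipeline in the deepstream-app order:
# nvstreammux -> nvinfer -> nvtracker -> nvmultistreamtiler -> osd -> display
... ! nvstreammux name=m batch-size=4 width=1920 height=1080 batched-push-timeout=4000000 \
  ! nvinfer config-file-path=./deepstreamrelated-master/Pytorch_Yolo_V4/config_infer_primary_yoloV4_b8_int8.txt interval=12 \
  ! nvtracker tracker-width=608 tracker-height=608 \
      ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so \
  ! nvmultistreamtiler width=1920 height=1080 rows=2 columns=2 \
  ! nvvideoconvert ! nvdsosd ! queue ! nvegltransform ! nveglglessink sync=false async=false
```

With the tracker before the tiler, tracked-object metadata still carries the per-stream source information, which matters for the broker use case mentioned above.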

Please refer to https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_FAQ.html#how-do-i-obtain-individual-sources-after-batched-inferencing-processing-what-are-the-sample-pipelines-for-nvstreamdemux

Thanks. These are very helpful.