Deepstream yolov4 multisource v4l2 with TensorRT engine {Error: Failed to enqueue buffer in fulldims mode}

• Hardware Platform: Jetson
• DeepStream Version: 5.0
• JetPack Version: 4.4
• TensorRT Version: 7.1.3.0

Hello,
I followed these steps to incorporate YOLOv4.

I checked that YOLOv4 was working and created the ONNX file, specifying batch=4:
python3 demo_darknet2onnx.py yolov4-tiny.cfg yolov4-tiny.weights ./data/traffic.jpg 4
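
To double-check that the exported ONNX file really has a static batch of 4, the input dimensions can be printed with the onnx Python package (a minimal sketch; the file name is taken from the export output below):

import onnx

# Load the exported model and print the shape of each graph input.
# For a static batch-4 export the first dimension should be 4.
model = onnx.load("yolov4_4_3_416_416_static.onnx")
for inp in model.graph.input:
    dims = [d.dim_value if d.dim_value > 0 else d.dim_param
            for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)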

Then I successfully created the TensorRT engine with:
/usr/src/tensorrt/bin/trtexec --onnx=yolov4_4_3_416_416_static.onnx --explicitBatch --saveEngine=yolov4_4_3_416_416_fp16.engine --workspace=8192 --fp16
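
For reference, the nvinfer config (config_infer_primary_yoloV4_batch4.txt) should look roughly like the sketch below; the paths, scale factor, and custom parser names are assumptions based on the pytorch-YOLOv4 DeepStream sample. The important part is that batch-size matches the batch dimension the engine was built with:

[property]
gpu-id=0
# assumed preprocessing: 1/255 scaling, as in the pytorch-YOLOv4 sample
net-scale-factor=0.0039215697906911373
model-engine-file=yolov4_4_3_416_416_fp16.engine
labelfile-path=labels.txt
# must match the static batch the engine was exported/built with
batch-size=4
# 2 = FP16
network-mode=2
num-detected-classes=80
gie-unique-id=1
# custom bbox parser from the pytorch-YOLOv4 DeepStream sample (assumed names)
parse-bbox-func-name=NvDsInferParseCustomYoloV4
custom-lib-path=libnvdsinfer_custom_impl_Yolo.so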

Problem: When I run deepstream-app, the tiled window appears momentarily and then closes with the following error.

deepstream-app -c ./deepstream_app_config_yoloV4_multiplev4l2.txt

Unknown or legacy key specified 'is-classifier' for group [property]
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_nvdcf.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvDCF] Initialized
0:00:03.491618765  8800     0x35d58c70 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/home/nvidia/Downloads/pytorch-YOLOv4/yolov4_4_3_416_416_fp16.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input           3x416x416       
1   OUTPUT kFLOAT boxes           2535x1x4        
2   OUTPUT kFLOAT confs           2535x80         

0:00:03.491800598  8800     0x35d58c70 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /home/nvidia/Downloads/pytorch-YOLOv4/yolov4_4_3_416_416_fp16.engine
0:00:03.502218541  8800     0x35d58c70 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/nvidia/Downloads/pytorch-YOLOv4/DeepStream/config_infer_primary_yoloV4_batch4.txt sucessfully

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.

** INFO: <bus_callback:181>: Pipeline ready

** INFO: <bus_callback:167>: Pipeline running

WARNING: Backend context bufferIdx(0) request dims:3x3x416x416 is out of range, [min: 4x3x416x416, max: 4x3x416x416]
ERROR: Failed to enqueue buffer in fulldims mode because binding idx: 0 with batchDims: 3x3x416x416 is not supported 
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_INVALID_PARAMS
0:00:03.937694797  8800     0x35d9c0a0 WARN                 nvinfer gstnvinfer.cpp:1216:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR from primary_gie: Failed to queue input batch for inferencing
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1216): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
Quitting
[NvDCF] De-initialized
WARNING: Backend context bufferIdx(0) request dims:3x3x416x416 is out of range, [min: 4x3x416x416, max: 4x3x416x416]
ERROR: Failed to enqueue buffer in fulldims mode because binding idx: 0 with batchDims: 3x3x416x416 is not supported 
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_INVALID_PARAMS
0:00:04.102367880  8800     0x35d9c0a0 WARN                 nvinfer gstnvinfer.cpp:1216:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
App run failed

deepstream_app_config_yoloV4_multiplev4l2.txt (5.5 KB) config_infer_primary_yoloV4_batch4.txt (3.4 KB)

Btw: when I perform all of the above steps with batch = 1, YOLO works fine. Could you help me figure out where I am going wrong?

Thanks. :)

Since you are using local video, could you remove “live-source=1” in [streammux] and try again?
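
For context: the engine is a static explicit-batch engine that only accepts batches of exactly 4, and with live-source=1 nvstreammux may push a partially filled batch (here 3 frames) once batched-push-timeout expires, which is most likely what produces the 3x3x416x416 request in the error above. A sketch of the [streammux] group after the fix (resolution and timeout values are just illustrative):

[streammux]
gpu-id=0
# must match the batch size of the engine and of nvinfer
batch-size=4
# with live-source removed, batches from file sources generally fill to batch-size
batched-push-timeout=40000
width=1280
height=720
enable-padding=0
nvbuf-memory-type=0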


Thanks, that solved the problem. Thanks for the meticulous reading.

In the batched (tiled) display, it doesn’t show the tracked object class/ID; it just shows the bounding box. Is that normal?

No, it should be able to show the object class/ID with the batched display as well.
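
If the labels still do not show up, it is worth double-checking that on-screen text and tracking-ID display are enabled in the app config. A minimal sketch (the ll-config-file path and tracker resolution are assumptions, and display-tracking-id may not be recognized by every DeepStream release, so drop that line if the parser complains):

[osd]
enable=1
gpu-id=0
border-width=2
text-size=15
font=Serif

[tracker]
enable=1
tracker-width=608
tracker-height=608
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_nvdcf.so
ll-config-file=tracker_config.yml
enable-batch-process=1
display-tracking-id=1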