The rules for setting the batch size for the object detector in DeepStream

I have two detectors in my pipeline: the first detector detects the presence of two classes of objects. The second detector (YOLOv4) takes the instances of class 1 and tries to detect 27 types of objects inside them.

I work with a single source video file.
When I set the batch size of both detectors to 1, everything works fine.
But when I increase the batch size to 2 for either of the two detectors (and generate the matching TRT engine file for each), I get the following error:

(python3:15065): GStreamer-CRITICAL **: 15:46:05.343: gst_mini_object_unref: assertion ‘(g_atomic_int_get (&mini_object->lockstate) & LOCK_MASK) < 4’ failed
ERROR: Batch size not 1
Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:nvvideo-renderer:
There may be a timestamping problem, or this computer is too slow.

This is my setup:

  • NVIDIA Jetson Xavier NX (Developer Kit Version)
    • Jetpack UNKNOWN [L4T 32.4.4]
    • NV Power Mode: MODE_15W_6CORE - Type: 2
    • jetson_stats.service: active
  • Libraries:
    • CUDA: 10.2.89
    • cuDNN: 8.0.0.180
    • TensorRT: 7.1.3.0
    • Visionworks: 1.6.0.501
    • OpenCV: 4.1.1 compiled CUDA: NO
    • VPI: 0.4.4
    • Vulkan: 1.2.70
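
For reference, in DeepStream the batch size is set in two places: the `batch-size` of the streammux element and the `batch-size` in each nvinfer config file. The relevant part of my secondary-detector config looks roughly like this (a sketch; the engine file name, network mode, and class IDs are placeholders for my actual setup):

```ini
# Secondary detector (YOLOv4) nvinfer config -- sketch, paths are placeholders
[property]
gpu-id=0
# the engine file must be built for the same batch size
model-engine-file=yolov4_b2_fp16.engine
batch-size=2
# 2 = FP16 precision
network-mode=2
num-detected-classes=27
gie-unique-id=2
# run as a secondary GIE on class 1 of the first detector
process-mode=2
operate-on-gie-id=1
operate-on-class-ids=1
```

The streammux `batch-size` stays at 1 since there is only one source; only the nvinfer `batch-size` was raised to 2.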

Hi, can you confirm that the two detector models support inference with batch size > 1?

Yes, I converted both detector networks to ONNX with batch size 2 and then to TRT plan files on the same machine, also with batch size 2.
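
For completeness, this is roughly how the engines were built with trtexec from TensorRT 7.1 (a sketch; the input tensor name `input` and the dimensions are placeholders for my actual models):

```shell
# Build a TRT engine from an ONNX model exported with a dynamic batch dimension
# (input tensor name and shapes are placeholders)
/usr/src/tensorrt/bin/trtexec \
    --onnx=yolov4.onnx \
    --explicitBatch \
    --minShapes=input:1x3x416x416 \
    --optShapes=input:2x3x416x416 \
    --maxShapes=input:2x3x416x416 \
    --fp16 \
    --saveEngine=yolov4_b2_fp16.engine
```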

There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Which app are you using? Can you share your nvinfer config file with us?