A secondary nvinfer does not handle objects smaller than 16x16

• Hardware Platform: GPU
• DeepStream Version: 6.3
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type: Bug

While investigating missing detections from secondary GIEs in our application, we found a limitation in nvinfer: it does not handle objects smaller than 16x16 coming from the primary GIE. This limitation applies only to secondary GIEs; if the object is instead cropped and sent to a primary GIE, it is handled regardless of size.

The cause is in gst-plugins/gst-nvinfer/gstnvinfer.cpp at line 861:

/* Should not infer on objects smaller than MIN_INPUT_OBJECT_WIDTH x MIN_INPUT_OBJECT_HEIGHT
 * since it will cause hardware scaling issues. */
nvinfer->min_input_object_width =
    MAX(MIN_INPUT_OBJECT_WIDTH, nvinfer->min_input_object_width);
nvinfer->min_input_object_height =
    MAX(MIN_INPUT_OBJECT_HEIGHT, nvinfer->min_input_object_height);

We have tested changing the values of MIN_INPUT_OBJECT_WIDTH and MIN_INPUT_OBJECT_HEIGHT to 1 and recompiling nvinfer. With that change, objects smaller than 16x16 are handled properly by secondary GIEs. I don’t know what the “hardware scaling issues” mentioned in the code comment refer to, but there appear to be no such issues in our case, as everything works properly for the smaller objects.

We will need to ship a patched version of nvinfer for as long as this limitation is present, so it would be very helpful if it could be fixed upstream.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

You are right to change the MIN_INPUT_OBJECT_WIDTH and MIN_INPUT_OBJECT_HEIGHT macros to 1; the value 16 exists because of a hardware limitation of the Jetson platform. The code is open source, so you can change it to suit your platform.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.