• Hardware Platform: GPU
• DeepStream Version: 6.3
• TensorRT Version: 8.5.3.1
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type: Bug
While investigating missing detections from secondary GIEs in our application, we found a limitation in nvinfer: it skips inference on objects from the primary GIE that are smaller than 16x16 pixels. This limitation only applies to secondary GIEs; if the same object is instead cropped and sent to a primary GIE, it is processed regardless of size.
The cause is in gst-plugins/gst-nvinfer/gstnvinfer.cpp at line 861:
/* Should not infer on objects smaller than MIN_INPUT_OBJECT_WIDTH x MIN_INPUT_OBJECT_HEIGHT
* since it will cause hardware scaling issues. */
nvinfer->min_input_object_width =
MAX(MIN_INPUT_OBJECT_WIDTH, nvinfer->min_input_object_width);
nvinfer->min_input_object_height =
MAX(MIN_INPUT_OBJECT_HEIGHT, nvinfer->min_input_object_height);
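For reference, the two constants appear to be hard-coded near the top of the same file (values as shipped with DeepStream 6.3; please verify against your own tree):

/* gst-plugins/gst-nvinfer/gstnvinfer.cpp (quoted from memory, verify locally) */
#define MIN_INPUT_OBJECT_WIDTH 16
#define MIN_INPUT_OBJECT_HEIGHT 16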
We have tested changing the values of MIN_INPUT_OBJECT_WIDTH and MIN_INPUT_OBJECT_HEIGHT to 1 and recompiling nvinfer. With that change, objects smaller than 16x16 are handled properly by secondary GIEs. I don't know what the "hardware scaling issues" mentioned in the code comment refer to, but we have not observed any such issues; everything works correctly for the smaller objects in our case. The change we made is sketched below.
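For anyone who needs the same workaround, this is roughly the patch we apply before recompiling the plugin (a sketch against the DeepStream 6.3 sources of gst-plugins/gst-nvinfer/gstnvinfer.cpp; exact locations may differ in your tree):

-#define MIN_INPUT_OBJECT_WIDTH 16
-#define MIN_INPUT_OBJECT_HEIGHT 16
+/* Workaround: let secondary GIEs infer on objects smaller than 16x16.
+ * Revisit if the "hardware scaling issues" mentioned in the original
+ * comment ever show up on your platform. */
+#define MIN_INPUT_OBJECT_WIDTH 1
+#define MIN_INPUT_OBJECT_HEIGHT 1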
We will have to maintain a patched version of nvinfer for as long as this limitation is present, so it would be very helpful if it could be fixed upstream.