SGIE: Classify very small objects

• Hardware Platform: Jetson
• DeepStream Version: 5.0.1
• JetPack Version: 4.4
• TensorRT Version: 7.1
• Issue Type: Question / Bug
• How to reproduce the issue?

Hi, I am trying to run a DS pipeline with a Yolov5 detector and an SGIE that classifies one of my four Yolo classes, starting from test app 2 (I tried both the C++ and the Python version). The detector works fine, and I managed to get the classifier to work with a custom parser. However, the classifier only runs on a single detected bounding box, no matter how many objects of the specified class the detector finds.

So in my case three “tags” are detected, all of which should be forwarded to the SGIE. However, only one of them gets the expected attribute annotation. Adding some debug prints shows that the custom parser is only called once, for a single box, even though it should be called three times.
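
For reference, this is roughly how I count the detections and their classifier results: a minimal sketch of a buffer probe (C++) attached downstream of the SGIE, where TAG_CLASS_ID = 3 is the class my SGIE operates on (matching operate-on-class-ids in the config below):

#include <gst/gst.h>
#include "gstnvdsmeta.h"

#define TAG_CLASS_ID 3

static GstPadProbeReturn
count_classified_tags (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    guint tags = 0, classified = 0;

    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj; l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      if (obj_meta->class_id != TAG_CLASS_ID)
        continue;
      tags++;
      /* the SGIE attaches its attributes here, so this list is non-empty
       * only for objects that were actually classified */
      if (obj_meta->classifier_meta_list != NULL)
        classified++;
    }
    g_print ("frame %d: %u tags detected, %u classified\n",
        frame_meta->frame_num, tags, classified);
  }
  return GST_PAD_PROBE_OK;
}

(Attached with gst_pad_add_probe() and GST_PAD_PROBE_TYPE_BUFFER, e.g. on the OSD sink pad.)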

Which parts of the config would be of interest for you to look at?
Any ideas why this might happen?

EDIT: See my reply.

My SGIE config is:

[property]
gpu-id=0
gie-unique-id=2
model-color-format=0
model-engine-file=../models/tag-256.engine

net-scale-factor=0.0039215686274509803921568627450980392156862745098039215686274509
#force-implicit-batch-dim=1
batch-size=1
process-mode=2
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
is-classifier=1
output-blob-names=tag_num
classifier-async-mode=1
classifier-threshold=0.01

input-object-min-width=0
input-object-min-height=0
input-object-max-width=0
input-object-max-height=0

operate-on-gie-id=1
operate-on-class-ids=3;
output-tensor-meta=1

parse-classifier-func-name=NvDsInferClassiferParseVGG
custom-lib-path=../lib/lib_vggparser.so

Debugging gstnvinfer.cpp showed that the two detected objects that are not classified are smaller than the hard-coded MIN_INPUT_OBJECT_WIDTH / MIN_INPUT_OBJECT_HEIGHT of 16 px. Can I somehow pad them to 16x16?
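
One idea I had is to pad the boxes myself before nvinfer sees them, e.g. with a buffer probe on the SGIE's sink pad that grows any box below 16x16 around its centre. A rough, untested sketch (note that this also changes what nvdsosd draws later, since rect_params is shared meta):

#include <gst/gst.h>
#include "gstnvdsmeta.h"

#define MIN_SIDE 16.0f

static GstPadProbeReturn
pad_small_objects (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    gfloat frame_w = (gfloat) frame_meta->source_frame_width;
    gfloat frame_h = (gfloat) frame_meta->source_frame_height;

    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj; l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      NvOSD_RectParams *r = &obj_meta->rect_params;

      /* grow undersized boxes symmetrically around their centre */
      if (r->width < MIN_SIDE) {
        r->left -= (MIN_SIDE - r->width) / 2;
        r->width = MIN_SIDE;
      }
      if (r->height < MIN_SIDE) {
        r->top -= (MIN_SIDE - r->height) / 2;
        r->height = MIN_SIDE;
      }
      /* clamp to the frame so the crop stays valid */
      if (r->left < 0) r->left = 0;
      if (r->top < 0) r->top = 0;
      if (r->left + r->width > frame_w) r->left = frame_w - r->width;
      if (r->top + r->height > frame_h) r->top = frame_h - r->height;
    }
  }
  return GST_PAD_PROBE_OK;
}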

Please also disable classifier-async-mode (set it to 0) in your sgie config.
Also, would you mind sharing the pgie config?

Which code are you referring to?

My PGIE config looks like this:

[property]
gpu-id=0
net-scale-factor=0.0039215686274509803921568627450980392156862745098039215686274509

#0=RGB, 1=BGR
model-color-format=0
model-engine-file=../models/20201217_s_640_b1.engine
labelfile-path=../labels/primary_labels.txt
process-mode=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
network-type=0
num-detected-classes=4
gie-unique-id=1
output-blob-names=prob
## 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
cluster-mode=4
maintain-aspect-ratio=1
interval=4

parse-bbox-func-name=NvDsInferParseCustomYoloV5
custom-lib-path=../lib/libnvdsinfer_custom_impl_Yolo_45_25_1.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
pre-cluster-threshold=0.2

[class-attrs-0]
pre-cluster-threshold=0.4

[class-attrs-3]
pre-cluster-threshold=0.1

Why should I disable async mode? I thought it would help with performance.

The code I am referring to is in gst-plugins/gst-nvinfer/gstnvinfer.cpp:

line 59:

#define MIN_INPUT_OBJECT_WIDTH 16
#define MIN_INPUT_OBJECT_HEIGHT 16

line 783:

 nvinfer->min_input_object_width =
      MAX(MIN_INPUT_OBJECT_WIDTH, nvinfer->min_input_object_width);
  nvinfer->min_input_object_height =
      MAX(MIN_INPUT_OBJECT_HEIGHT, nvinfer->min_input_object_height);

line 1439 following:

if (obj_meta->rect_params.width < nvinfer->min_input_object_width)
    return FALSE;

Together with the MAX() clamp above (which overrides my input-object-min-width=0 setting), this results in every object smaller than 16x16 being skipped by the SGIE. I tried changing the #define values, but that just crashed the app elsewhere in the code.

Thanks for any advice!

Yeah, you are right. Currently we cannot infer on objects smaller than 16x16; you can see the comment in gstnvinfer.cpp:

  /* Should not infer on objects smaller than MIN_INPUT_OBJECT_WIDTH x MIN_INPUT_OBJECT_HEIGHT
   * since it will cause hardware scaling issues. */

Yeah, it will improve performance, but sometimes it will cause classifier metadata to be missed.
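
For example, in the [property] group of your sgie config:

# with async mode off, nvinfer waits for the classifier output and attaches
# it to the object meta in the same buffer
classifier-async-mode=0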