SGIE doesn't give inference results for all detected objects

Please provide complete information as applicable to your setup.

• Hardware Platform: NVIDIA A10
• DeepStream Version: 6.3
• TensorRT Version: 8.5.1.7-1
• NVIDIA GPU Driver Version: 535.154.05
• Issue Type: Question

**Issue:**
We have a YOLOv3-based object detector as the PGIE, with "animal" being one of its classes, and an EfficientNet-based classifier acting as the SGIE to classify the animal objects.
We are not getting SGIE inference results for some instances of the animal class, while the expectation is that every instance should be classified.
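
To quantify this, a buffer probe downstream of the SGIE can count, per frame, how many animal objects carry classifier metadata. Below is a minimal sketch using the DeepStream Python bindings (pyds); the probe name is illustrative, and `ANIMAL_CLASS_ID` / `SGIE_UNIQUE_ID` mirror the operate-on-class-ids and gie-unique-id values in the configs further down.

```python
import pyds
from gi.repository import Gst

ANIMAL_CLASS_ID = 0  # matches operate-on-class-ids in the SGIE config below
SGIE_UNIQUE_ID = 2   # matches gie-unique-id in the SGIE config below

def sgie_src_pad_probe(pad, info, user_data):
    """Count animal detections with and without SGIE classifier metadata."""
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        classified, unclassified = 0, 0
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            if obj_meta.class_id == ANIMAL_CLASS_ID:
                # Look for classifier meta produced by our SGIE.
                has_sgie_result = False
                l_cls = obj_meta.classifier_meta_list
                while l_cls is not None:
                    cls_meta = pyds.NvDsClassifierMeta.cast(l_cls.data)
                    if cls_meta.unique_component_id == SGIE_UNIQUE_ID:
                        has_sgie_result = True
                    l_cls = l_cls.next
                if has_sgie_result:
                    classified += 1
                else:
                    unclassified += 1
            l_obj = l_obj.next
        if unclassified:
            print(f"frame {frame_meta.frame_num}: "
                  f"{classified} classified, {unclassified} missing SGIE result")
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

The probe can be attached with `sgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, sgie_src_pad_probe, None)`.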

We have gone through the issue below:

The solution in the forum post above does not apply to us, as we are not using nvtracker in our pipeline.

**Pipeline structure:**
uridecodebin->videorate->(nvstreammux(2sources per nvstreammux)->pgie->sgie->nvstreamdemux)->nvvideoconvert->nvdsosd->capsfilter->nvv4l2h264enc->rtph264pay->udpsink
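
For reference, a single-source variant of this pipeline can be spelled out with `Gst.parse_launch` as below (a minimal sketch; the URI, resolutions, and config paths are placeholders, and the videorate/nvstreamdemux/encoder branches are omitted for brevity):

```python
#!/usr/bin/env python3
# Minimal single-source sketch of the pipeline described above.
# URI and config-file paths are placeholders, not our production values.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

pipeline = Gst.parse_launch(
    "uridecodebin uri=file:///path/to/input.mp4 ! m.sink_0 "
    "nvstreammux name=m batch-size=1 width=1920 height=1080 ! "
    "nvinfer config-file-path=pgie_config.txt ! "   # PGIE (gie-unique-id=101)
    "nvinfer config-file-path=sgie_config.txt ! "   # SGIE (operate-on-gie-id=101)
    "nvvideoconvert ! nvdsosd ! fakesink"
)

# No bus watch in this sketch; stop with Ctrl+C.
pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)
```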

**PGIE Config**

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=/opt/paralaxiom/vast/platform/nvast/ds_vast_pipeline/d24_feb0824_labels.txt
# maintain-aspect-ratio=0
output-tensor-meta=0
#model-engine-file=final-model-int8-pruned.etlt.etlt_b1_gpu0_int8.engine
model-engine-file=/opt/paralaxiom/vast/platform/nvast/ds_vast_pipeline/d24_feb0824_apm_fframe_yolov4_resnet18_epoch_036_drop4.etlt_b1_gpu0_int8.engine
#int8-calib-file=cal-pruned.bin
int8-calib-file=/opt/paralaxiom/vast/platform/nvast/ds_vast_pipeline/d24_feb0824_apm_fframe_yolov4_resnet18_epoch_036_drop4.bin
#tlt-encoded-model=final-model-int8-pruned.etlt
tlt-encoded-model=/opt/paralaxiom/vast/platform/nvast/ds_vast_pipeline/d24_feb0824_apm_fframe_yolov4_resnet18_epoch_036_drop4.etlt
tlt-model-key=nvidia_tlt
infer-dims=3;1056;1888
maintain-aspect-ratio=0
uff-input-order=0
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=11
interval=0
gie-unique-id=101
is-classifier=0
# network-type=0
#no cluster
cluster-mode=3
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/libnvds_infercustomparser.so
#scaling-filter=1

[class-attrs-all]
pre-cluster-threshold=0.5

[class-attrs-0]
post-cluster-threshold=0.8

**SGIE Config**

[property]
gpu-id=0
net-scale-factor=0.00392156862
tlt-encoded-model=/opt/paralaxiom/vast/platform/nvast/ds_vast_pipeline/d24_feb2024_tf2_efficientnet-b1_epoch_070-b1_4_01_fp32.etlt
model-engine-file=/opt/paralaxiom/vast/platform/nvast/ds_vast_pipeline/d24_feb2024_tf2_efficientnet-b1_epoch_070-b1_4_01_fp32.etlt_b1_gpu0_fp32.engine
labelfile-path=/opt/paralaxiom/vast/platform/nvast/ds_vast_pipeline/classmap_d24_feb2024.txt
tlt-model-key=nvidia_tlt
offsets=103.939;116.779;123.68
batch-size=1
# 0=FP32 and 1=INT8 mode
network-mode=0
infer-dims=3;258;258
# infer-dims=3;224;224
process-mode=2
model-color-format=0
gpu-id=0
classifier-async-mode=0
gie-unique-id=2
# Make sure operate-on-gie-id matches the gie-unique-id of the primary detector
operate-on-gie-id=101
# This model is an animal classifier; make sure the value below matches the animal class ID in the label file
operate-on-class-ids=0
# is-classifier=1
# uff-input-blob-name=input_1
output-blob-names=Identity:0
#output-blob-names=predictions/Softmax
network-input-order=0
# classifier-async-mode=1
classifier-threshold=0
scaling-filter=5
# network-type=1 defines that the model is a classifier.
network-type=1
#scaling-compute-hw=0
##
# output-tensor-meta=1
# network-type=100

1. Can you see the PGIE's output bounding boxes? Are there no classification results, or are the classification results wrong?
2. Can the SGIE model work well when tested with other tools? If yes, to simplify, please refer to FAQ 19, "[DSx_All_App] How to use classification model as pgie?", to test the model directly.
1. We are able to see the PGIE output, but not the corresponding SGIE output. There is no classification for some instances of the animal class.

2. Thanks for sharing the article. I will try running the classifier as a PGIE on some images.

Sorry for the late reply. Is this still a DeepStream issue to support? Thanks!

Sorry for the late reply.
We are still facing this issue.
Meanwhile, we will also run these images through a standalone PGIE classifier in DeepStream.

If testing the SGIE with other tools works fine, you can compare the preprocessing parameters. Please refer to the explanation doc.
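
For reference, Gst-nvinfer normalizes each input pixel as y = net-scale-factor * (x - offsets), per channel. A standalone test should apply the same transform before feeding the model; here is a short sketch with NumPy, using the values from the SGIE config above:

```python
import numpy as np

# Gst-nvinfer preprocessing: y = net-scale-factor * (x - offsets), per channel.
# Values taken from the SGIE config above; channel order follows model-color-format.
NET_SCALE_FACTOR = 0.00392156862  # 1/255
OFFSETS = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def nvinfer_preprocess(pixels: np.ndarray) -> np.ndarray:
    """Mimic nvinfer's normalization on an HxWx3 uint8 image for offline comparison."""
    return NET_SCALE_FACTOR * (pixels.astype(np.float32) - OFFSETS)
```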

I ran the classifier as a PGIE using one of the sample apps.
Many of the low-resolution images did not get results.
When running these images, the error below occurred.

The model resolution is 258x258, and many of the images have a much lower resolution than that.
Images with a resolution below 20x15 encountered this error.
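
To list which test images fall below that size, a quick scan like the following can be used (illustrative only; the crops/ directory is a placeholder, and the 20x15 cutoff is the one observed above):

```python
from pathlib import Path
from PIL import Image  # pip install pillow

MIN_W, MIN_H = 20, 15  # cutoff below which the error was observed

for path in sorted(Path("crops/").glob("*.jpg")):  # hypothetical directory
    with Image.open(path) as img:
        if img.width < MIN_W or img.height < MIN_H:
            print(f"{path}: {img.width}x{img.height}")
```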

I have also attached the config file used for running the DeepStream sample app.

**Error log**

Unknown or legacy key specified 'is-classifier' for group [property]
0:00:00.796830348 26798 0x55560daeb730 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/vast/gaurav/classifier/d25_apr0324_tf2_efficientnet-b1_epoch_062_v2.etlt_b1_gpu0_fp32.engine
0:00:00.796892332 26798 0x55560daeb730 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/vast/gaurav/classifier/d25_apr0324_tf2_efficientnet-b1_epoch_062_v2.etlt_b1_gpu0_fp32.engine
0:00:00.798732750 26798 0x55560daeb730 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<nvinfer0> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-classify-pgie-test/dstest_appsrc_config.txt sucessfully

** (gst-launch-1.0:26798): CRITICAL **: 16:34:13.641: gst_nvds_buffer_pool_alloc_buffer: assertion 'mem' failed
ERROR: from element /GstPipeline:pipeline0/GstNvStreamMux:mux: Failed to allocate the buffers inside the Nvstreammux output pool
Additional debug info:
gstnvstreammux.c(791): gst_nvstreammux_alloc_output_buffers (): /GstPipeline:pipeline0/GstNvStreamMux:mux
ERROR: pipeline doesn't want to preroll.
run_accuracy_test.sh: line 15: Setting pipeline to PAUSED ...
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input:0         3x258x258       
1   OUTPUT kFLOAT Identity:0      4               

Pipeline is PREROLLING ...
WARNING: from element /GstPipeline:pipeline0/GstNvStreamMux:mux: Rounding muxer output width to the next multiple of 8: 264
Additional debug info:
gstnvstreammux.c(2795): gst_nvstreammux_change_state (): /GstPipeline:pipeline0/GstNvStreamMux:mux
WARNING: from element /GstPipeline:pipeline0/GstNvStreamMux:mux: Rounding muxer output height to the next multiple of 4: 260
Additional debug info:
gstnvstreammux.c(2803): gst_nvstreammux_change_state (): /GstPipeline:pipeline0/GstNvStreamMux:mux
Cuda failure: status=700
Error(-1) in buffer allocation
Setting pipeline to NULL ...: No such file or directory

**Classifier config file**

[property]
# net-scale-factor=0.00392156862 
tlt-encoded-model=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-classify-pgie-test/gr/classifier/d25_apr0324_tf2_efficientnet-b1_epoch_062_v2.etlt
model-engine-file=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-classify-pgie-test/gr/classifier/d25_apr0324_tf2_efficientnet-b1_epoch_062_v2.etlt_b1_gpu0_fp32.engine
labelfile-path=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-classify-pgie-test/gr/classifier/classmap_d24_feb2024.txt
tlt-model-key=nvidia_tlt
network-input-order=0
offsets=103.939;116.779;123.68
infer-dims=3;258;258
network-type=1
maintain-aspect-ratio=1
output-tensor-meta=0
model-color-format=0
classifier-threshold=0
batch-size=1
network-mode=0
interval=0
gie-unique-id=1
output-blob-names=Identity:0
cluster-mode=2
is-classifier=1
scaling-filter=5
symmetric-padding=1

20x15 is a very small resolution; you can use input-object-min-width or input-object-min-height to filter out such small detections. If you still need to infer on these pictures, does the application fail every time you test one? Can you try scaling-filter=0?
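
For reference, the same filtering can also be done in a buffer probe on the SGIE's sink pad, by removing small objects from the metadata before classification runs. Here is a sketch with the Python bindings (pyds); the 20x15 threshold matches the size you reported:

```python
import pyds
from gi.repository import Gst

MIN_W, MIN_H = 20, 15  # drop detections smaller than this before the SGIE

def drop_small_objects_probe(pad, info, user_data):
    """Remove sub-threshold detections from frame metadata so the SGIE skips them."""
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            l_obj = l_obj.next  # advance before any removal
            rect = obj_meta.rect_params
            if rect.width < MIN_W or rect.height < MIN_H:
                pyds.nvds_remove_obj_meta_from_frame(frame_meta, obj_meta)
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```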

We want results for all instances, so using a minimum height or width criterion is not an option for us.

I tried it with scaling-filter=0 and got the same error.

I don't see why this config change should help, as nvstreammux itself is throwing the error.

I reran it with GStreamer debug level 5.

Attaching the generated log file:
classify.log (6.2 MB)

Testing a 20x15 picture, I can't reproduce this issue in the sample mentioned above.
From the log you shared, there is a line "output width to the next multiple of 8: 1984", so it seems you are using a different command line. Could you provide the picture, model, and command line to help reproduce this issue? You can use the forum's private messages: please click your forum avatar -> personal messages -> new message. Thanks!

About the command used to generate the results: I was using the command below, which is not much different from the command recommended in the tutorial. I have only added the nvbuf-memory-type=3 property to nvvideoconvert.

On my part, I had used the wrong resolution, which caused the following message to occur:
output width to the next multiple of 8: 1984

Even with the original command from the sample application, the same error was repeated.

gst-launch-1.0 filesrc location="$1"/"$OUTPUT" ! jpegdec ! videoconvert ! video/x-raw,format=I420 ! nvvideoconvert nvbuf-memory-type=3 ! video/x-raw\(memory:NVMM\),format=NV12 ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-classify-pgie-test/dstest_appsrc_config.txt ! nvvideoconvert nvbuf-memory-type=3 ! video/x-raw\(memory:NVMM\),format=RGBA ! nvdsosd ! nvvideoconvert nvbuf-memory-type=3 ! video/x-raw,format=I420 ! jpegenc ! filesink location="$outDir"

I have also sent you the relevant model files and images via private message.

Using the images you shared, I ran two tests: one with the model mentioned in the tutorial, and the other with the model you shared. I can't reproduce that "Failed to allocate" issue on the DeepStream 6.4 docker with an RTX 6000. Here are the test details; is there any difference from your test?
1.sh (571 Bytes) test1.txt (109.6 KB) test2.txt (105.2 KB)

I also tried running it on another machine, and there it worked as intended. I will keep monitoring it in the pipeline where the classifier is the SGIE.

Sorry for the late reply. Is this still a DeepStream issue to support? Thanks!

I am still facing a similar issue in my application. I'll generate the logs there and share them here.

For now, I think this is a DeepStream issue, because the same behavior also occurs with other models.

Sorry for the late reply. Is this still a DeepStream issue to support? Thanks!

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.