Please provide complete information as applicable to your setup.
• Hardware Platform: NVIDIA A10
• DeepStream Version: 6.3
• TensorRT Version: 8.5.1.7-1
• NVIDIA GPU Driver Version (valid for GPU only): 535.154.05
• Issue Type: questions
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, for which plugin or for which sample application, and the function description.)
**Issue:**
We have a YOLOv3-based object detector as the PGIE, with animal being one of its classes. We have an EfficientNet-based classifier acting as the SGIE, which classifies the animal objects.
We are not getting the SGIE inference result for some instances of the animal class, while the expectation was that all instances would be classified.
We have gone through the issue below:
The solution in the above forum post does not apply to us, as we are not using nvtracker in the pipeline.
Pipeline Structure
uridecodebin->videorate->(nvstreammux(2sources per nvstreammux)->pgie->sgie->nvstreamdemux)->nvvideoconvert->nvdsosd->capsfilter->nvv4l2h264enc->rtph264pay->udpsink
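For illustration, a rough gst-launch-1.0 sketch of this topology is below. The URIs, muxer dimensions, config file names, UDP ports, and the extra nvvideoconvert before the encoder caps (to convert the OSD's RGBA output to NV12) are assumptions; our actual pipeline is built in the application, so this is only an approximation.

```
# Rough sketch only: URIs, resolutions, config paths and ports are placeholders.
gst-launch-1.0 \
  nvstreammux name=mux batch-size=2 width=1920 height=1080 batched-push-timeout=40000 ! \
  nvinfer config-file-path=pgie_yolov3_config.txt ! \
  nvinfer config-file-path=sgie_efficientnet_config.txt ! \
  nvstreamdemux name=demux \
  uridecodebin uri=rtsp://camera-1/stream ! videorate ! mux.sink_0 \
  uridecodebin uri=rtsp://camera-2/stream ! videorate ! mux.sink_1 \
  demux.src_0 ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! \
    'video/x-raw(memory:NVMM),format=NV12' ! nvv4l2h264enc ! rtph264pay ! udpsink host=224.1.1.1 port=5400 sync=false \
  demux.src_1 ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! \
    'video/x-raw(memory:NVMM),format=NV12' ! nvv4l2h264enc ! rtph264pay ! udpsink host=224.1.1.1 port=5401 sync=false
```

The SGIE nvinfer config we use is: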
[property]
gpu-id=0
net-scale-factor=0.00392156862
tlt-encoded-model=/opt/paralaxiom/vast/platform/nvast/ds_vast_pipeline/d24_feb2024_tf2_efficientnet-b1_epoch_070-b1_4_01_fp32.etlt
model-engine-file=/opt/paralaxiom/vast/platform/nvast/ds_vast_pipeline/d24_feb2024_tf2_efficientnet-b1_epoch_070-b1_4_01_fp32.etlt_b1_gpu0_fp32.engine
labelfile-path=/opt/paralaxiom/vast/platform/nvast/ds_vast_pipeline/classmap_d24_feb2024.txt
tlt-model-key=nvidia_tlt
offsets=103.939;116.779;123.68
batch-size=1
# 0=FP32 and 1=INT8 mode
network-mode=0
infer-dims=3;258;258
# infer-dims=3;224;224
process-mode=2
model-color-format=0
gpu-id=0
classifier-async-mode=0
gie-unique-id=2
# MAKE SURE THE OPERATE-ON-GIE-ID MATCHES THE GIE-UNIQUE-ID OF PRIMARY DETECTOR MODEL
operate-on-gie-id=101
# THIS MODEL IS AN ANIMAL CLASSIFIER; MAKE SURE THE PARAMETER VALUE BELOW MATCHES THE ANIMAL VALUE IN THE LABEL FILE
operate-on-class-ids=0
# is-classifier=1
# uff-input-blob-name=input_1
output-blob-names=Identity:0
#output-blob-names=predictions/Softmax
network-input-order=0
# classifier-async-mode=1
classifier-threshold=0
scaling-filter=5
# network-type=1 defines that the model is a classifier.
network-type=1
#scaling-compute-hw=0
##
# output-tensor-meta=1
# network-type=100
I ran the classifier as the PGIE using one of the sample apps.
Many of the low-resolution images did not get any results.
When running these images, the error below occurred.
The model resolution is 258x258, and many of the images have a much lower resolution than this.
Images with a resolution smaller than 20x15 encountered this error.
I have also attached the config file used for running the DeepStream sample app; a sketch of the kind of command involved is shown below.
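For reference, the sample runs a single JPEG through the classifier as PGIE with a command along these lines (file names and the 258x258 muxer size are assumptions for illustration; the muxer size would be consistent with the 264/260 rounding warnings in the log below):

```
# Illustrative sketch only: file names and muxer dimensions are assumed.
gst-launch-1.0 filesrc location=./animal_crop.jpg ! jpegdec ! videoconvert ! \
  'video/x-raw,format=I420' ! nvvideoconvert ! \
  'video/x-raw(memory:NVMM),format=NV12' ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=258 height=258 ! \
  nvinfer config-file-path=./dstest_appsrc_config.txt ! nvvideoconvert ! \
  'video/x-raw(memory:NVMM),format=RGBA' ! nvdsosd ! nvvideoconvert ! \
  'video/x-raw,format=I420' ! jpegenc ! filesink location=./out.jpg
```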
Error log:
Unknown or legacy key specified 'is-classifier' for group [property]
0:00:00.796830348 26798 0x55560daeb730 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/vast/gaurav/classifier/d25_apr0324_tf2_efficientnet-b1_epoch_062_v2.etlt_b1_gpu0_fp32.engine
0:00:00.796892332 26798 0x55560daeb730 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/vast/gaurav/classifier/d25_apr0324_tf2_efficientnet-b1_epoch_062_v2.etlt_b1_gpu0_fp32.engine
0:00:00.798732750 26798 0x55560daeb730 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<nvinfer0> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-classify-pgie-test/dstest_appsrc_config.txt sucessfully
** (gst-launch-1.0:26798): CRITICAL **: 16:34:13.641: gst_nvds_buffer_pool_alloc_buffer: assertion 'mem' failed
ERROR: from element /GstPipeline:pipeline0/GstNvStreamMux:mux: Failed to allocate the buffers inside the Nvstreammux output pool
Additional debug info:
gstnvstreammux.c(791): gst_nvstreammux_alloc_output_buffers (): /GstPipeline:pipeline0/GstNvStreamMux:mux
ERROR: pipeline doesn't want to preroll.
run_accuracy_test.sh: line 15: Setting pipeline to PAUSED ...
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input:0 3x258x258
1 OUTPUT kFLOAT Identity:0 4
Pipeline is PREROLLING ...
WARNING: from element /GstPipeline:pipeline0/GstNvStreamMux:mux: Rounding muxer output width to the next multiple of 8: 264
Additional debug info:
gstnvstreammux.c(2795): gst_nvstreammux_change_state (): /GstPipeline:pipeline0/GstNvStreamMux:mux
WARNING: from element /GstPipeline:pipeline0/GstNvStreamMux:mux: Rounding muxer output height to the next multiple of 4: 260
Additional debug info:
gstnvstreammux.c(2803): gst_nvstreammux_change_state (): /GstPipeline:pipeline0/GstNvStreamMux:mux
Cuda failure: status=700
Error(-1) in buffer allocation
Setting pipeline to NULL ...: No such file or directory
20x15 is a very small resolution; you can use input-object-min-width or
input-object-min-height to filter out such small objects, as in the sketch below. If you still need to infer on this picture, does the application fail every time you test it? Can you try scaling-filter=0?
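For example, something like this could be added to the SGIE [property] group (the 20/15 thresholds below are only illustrative values):

```
# Example only: skip secondary inference on objects smaller than these thresholds.
input-object-min-width=20
input-object-min-height=15
# Or, while debugging, try the suggested scaling filter instead of 5.
scaling-filter=0
```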
Testing a 20x15 picture, I can’t reproduce this issue in the sample mentioned above.
From the log shared, there is a line "output width to the next multiple of 8: 1984", so it seems you are using a different command line. Could you provide the picture, model, and command line to help reproduce this issue? You can use the forum private message: please click forum avatar -> personal messages -> new message. Thanks!
About the command used to generate the results: I was using the command mentioned below, which is not so different from the command recommended by the tutorial. I have only added the nvbuf-memory-type=3 property for nvvideoconvert.
On my part, I used the wrong resolution, which caused the following message to occur:
output width to the next multiple of 8: 1984
Even with the original command in the sample application, the same error was repeated.
Using the images you shared, I did two tests: one testing the model mentioned in the tutorial, the other testing the model you shared, and I can’t reproduce that “Failed to allocate” issue on the DeepStream 6.4 docker with an RTX 6000. Here are the test details; is there any difference from your test? 1.sh (571 Bytes) test1.txt (109.6 KB) test2.txt (105.2 KB)
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If further support is needed, please open a new one. Thanks.