DeepStream 4.0 secondary classification problem

deepstream4.0/deepstream/deepstream_sdk_v4.0_x86_64/sources/apps/sample_apps/deepstream-app/deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt

We ran this sample and encountered three problems.

  1. Based on the source code: when we use [primary-gie][secondary-gie0][secondary-gie1][secondary-gie2] but do not use the tracker plugin, why do we get only bounding boxes and no classification results?

  2. Based on the source code (using [primary-gie][tracker][secondary-gie0][secondary-gie1][secondary-gie2]), why does the first frame have only bounding boxes and no classification results?
    We changed config_infer_secondary_carcolor.txt, config_infer_secondary_carmake.txt, and config_infer_secondary_vehicletypes.txt to set:
    input-object-min-width=0
    input-object-min-height=0
    This is unlike the documented behavior:
    The object is inferred upon only when it is first seen in a frame (based on its object ID) or when the size (bounding box area) of the object increases by 20% or more.
    This is our debug info file:
    linux@linux-MS-7A15:~/tool/deepstream4.0/deepstream/deepstream_sdk_v4.0_x86_64/sources/objectDetector_Yolo/debuginfo$ cat debug.txt | grep -i 'text_params: Car 646'
    text_params: Car 646
    text_params: Car 646 silver sedan lexus
    text_params: Car 646 white lexus suv
    text_params: Car 646 lexus suv white
    text_params: Car 646 lexus suv white
    text_params: Car 646 suv lexus white
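The documented re-inference policy quoted above (classify an object when it is first seen, or when its bounding-box area has grown by 20% or more) can be sketched as a small standalone function. This is an illustration of the rule only; the names and layout are hypothetical, not the actual gst-nvinfer source:

```c
#include <stdbool.h>

/* Hypothetical sketch of the documented secondary-GIE re-inference rule:
 * classify an object when it is first seen (no history yet) or when its
 * bounding-box area has grown by 20% or more since the last inference. */
typedef struct {
    bool   seen_before;        /* has this object ID been classified yet? */
    double last_inferred_area; /* bbox area at the last classification */
} ObjectHistory;

bool should_reinfer(ObjectHistory *hist, double bbox_area)
{
    if (!hist->seen_before || bbox_area >= 1.2 * hist->last_inferred_area) {
        hist->seen_before = true;
        hist->last_inferred_area = bbox_area;
        return true;   /* run the classifier on this object */
    }
    return false;      /* reuse the cached classification result */
}
```

Under this rule, setting input-object-min-width/height to 0 only removes the size filter; an object whose box area stays stable is still skipped on subsequent frames, which is consistent with the repeated cached labels in the debug output above.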


  3. Based on the source code (using [primary-gie][tracker][secondary-gie0][secondary-gie1][secondary-gie2]), how can we get classification results for every bounding box in every frame?
    What configuration should we modify to achieve this?

Hi,

1.
The secondary GIEs are not applied to every input frame.
If the tracker is not enabled, the NvDsBatchMeta values (with text information) may not be set correctly.
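As a hedged sketch, the tracker is enabled through the [tracker] group of the deepstream-app config file. The values below are illustrative, assuming the DeepStream 4.0 install layout and the reference KLT tracker; adjust them for your setup:

```ini
# [tracker] group in the deepstream-app config (illustrative values)
[tracker]
enable=1
tracker-width=640
tracker-height=368
# reference KLT tracker library shipped with DeepStream 4.0; path may differ
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
```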

2.
Please see this page for the DeepStream pipeline:
https://docs.nvidia.com/metropolis/deepstream/4.0/dev-guide/index.html#page/DeepStream_Development_Guide%2Fdeepstream_app_architecture.html
The secondary GIE is not applied to each input frame.

3.
Since deepstream-app is open source, you can modify the pipeline to match your requirements directly:
/opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps/deepstream-app

Thanks.

@AastaLLL While this question is old, I am facing some similar problems. I have a pipeline with 1 detector and 3 classifiers, running on DeepStream 5. My pipeline builds on the DeepStream Test 2 Python application. I am not able to run more than 2 classifiers at the same time on the detected objects. All the classifiers work on the same class of object detected by the primary inference engine. I am also using a tracker.

Primary Engine Detection
|---------------- Classifier 1 on Objects of Class 0
|---------------- Classifier 2 on Objects of Class 0
|---------------- Classifier 3 on Objects of Class 0

I am using 5 separate config files:
1 for the detector
3 for the classifiers (1 each)
1 for the tracker

Classifier Async Mode is disabled.

I am not able to see more than two classifiers’ output at the same time. How can I apply each of the three secondary classifier engines on each bounding box of interest?
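One possible cause, offered as an assumption rather than a confirmed diagnosis: if two of the classifier configs share the same gie-unique-id, their outputs can collide, so only some classifier metadata survives. Each secondary nvinfer config needs its own gie-unique-id, with operate-on-gie-id pointing at the detector. A sketch of one classifier config (file name and values are illustrative):

```ini
# config_infer_secondary_classifier1.txt (illustrative; repeat for the other
# two classifiers with gie-unique-id=3 and gie-unique-id=4)
[property]
# classifier network, running as a secondary GIE on the detector's objects
network-type=1
process-mode=2
# must be unique for every GIE in the pipeline
gie-unique-id=2
# the primary detector's gie-unique-id
operate-on-gie-id=1
# Class 0, as in your pipeline
operate-on-class-ids=0
# synchronous classification, as you already have
classifier-async-mode=0
```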