deepstream4.0/deepstream/deepstream_sdk_v4.0_x86_64/sources/apps/sample_apps/deepstream-app/deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
We ran this sample and ran into three problems.
- Based on the source code, when we use [primary-gie][secondary-gie0][secondary-gie1][secondary-gie2] but no tracker plugin, why do we get only bounding boxes and no classification results?
- Based on the source code (using [primary-gie][tracker][secondary-gie0][secondary-gie1][secondary-gie2]), why does the first frame have only bounding boxes and no classification results?
We changed the following in config_infer_secondary_carcolor.txt, config_infer_secondary_carmake.txt, and config_infer_secondary_vehicletypes.txt:
input-object-min-width=0
input-object-min-height=0
The behavior we observe does not match the documentation, which states:
The object is inferred upon only when it is first seen in a frame (based on its object ID) or when the size (bounding box area) of the object increases by 20% or more.
This is our debug output:
linux@linux-MS-7A15:~/tool/deepstream4.0/deepstream/deepstream_sdk_v4.0_x86_64/sources/objectDetector_Yolo/debuginfo$ cat debug.txt | grep -i 'text_params: Car 646'
text_params: Car 646
text_params: Car 646 silver sedan lexus
text_params: Car 646 white lexus suv
text_params: Car 646 lexus suv white
text_params: Car 646 lexus suv white
text_params: Car 646 suv lexus white
(end of excerpt)
- Based on the source code (using [primary-gie][tracker][secondary-gie0][secondary-gie1][secondary-gie2]), how can we get classification results for every bounding box in every frame?
What configuration changes should we make to achieve this?