Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) question/bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
I’m coming to you with a question about the following situation:
we had DS app operating on multiple streams, standard yolov3 model and slightly customized messages. One of the fields in the messages were object confidence extracted from NvDsObjectMeta. and this was working as expected.
We then replaced the yolov3 with an different object detection model that we trained using Tao. We built necessary libraries and from the sink output confirmed that the performance is as we expected. The messages however report constant confidence of the detected objects of -0.10000000149011612. Which is
- below zero and constant
- lower than our pre-cluster-threshold in config-infer-primary, which is set to 0.6 as advised in the docs.
We’re struggling to understand why this is the case. Do TAO models populate the same NvDsObjectMeta fields in DeepStream as our previous YOLOv3 model did? Are there other differences we should be aware of when creating messages based on TAO-trained models? Does it vary by model? (We are only interested in object detection models.)
Dear admins, feel free to move this post from the DS to the TAO category if that is where it belongs :) I wasn’t sure.
1. Are you testing with deepstream-app?
2. Please make sure your model is OK. Did you use the same test source as with the YOLO model? You need to use a proper test source for your model.
3. We know nothing about your model; you should know how to feed data into it. Please make sure your configuration is right.
- Yes, I’m testing with an app similar to deepstream-test5, as it implements the messaging I’m describing.
- Yes, both models were tested using the same source.
- I believe the configuration is right, judging by the visual output from the sink: the bounding boxes appear where I expect them. The custom model is FasterRCNN trained with QAT. I followed all the steps required to deploy a TAO model with DeepStream, but there is no mention of any need to customize the message converter or payload when changing models. No other changes were made to the app in question. In both cases the model was for object detection; previously the engine was created from YOLOv3 weights, and this time it’s an engine built from an etlt file using the NVIDIA training environment.
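For context, a minimal sketch of the relevant part of our config-infer-primary (illustrative values only; model-specific paths omitted):

```ini
# Hypothetical excerpt from config_infer_primary -- values are illustrative
[property]
# Clustering algorithm applied to raw detections:
# 0=GroupRectangles, 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS hybrid, 4=None
cluster-mode=2

[class-attrs-all]
# Raw detections scoring below this are discarded before clustering
pre-cluster-threshold=0.6
```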
Would you be able to confirm that using a TAO-based model in DeepStream populates the confidence field of NvDsObjectMeta?
Which value does pre-cluster-threshold from config_infer_primary operate on? How can I confirm that the threshold is enforced as expected?
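To make the ordering concrete, here is a minimal, self-contained sketch (plain Python, not the DeepStream API; the function name is hypothetical) of how a pre-cluster threshold gates raw detections before clustering. Detections that survive the gate keep their original score, so a constant -0.1 in the output suggests the later clustering stage did not preserve the score rather than a threshold problem:

```python
# Illustrative sketch only -- names are hypothetical, not DeepStream API.
# A raw detection is (x1, y1, x2, y2, score).
def pre_cluster_filter(detections, threshold):
    """Drop raw detections whose score is below the pre-cluster threshold."""
    return [d for d in detections if d[4] >= threshold]

raw = [
    (10, 10, 50, 50, 0.95),      # confident detection, kept
    (12, 12, 52, 52, 0.30),      # below the 0.6 threshold, dropped
    (100, 100, 140, 140, 0.70),  # kept
]

kept = pre_cluster_filter(raw, threshold=0.6)
# Survivors keep their real scores; nothing here produces -0.1.
print([d[4] for d in kept])  # -> [0.95, 0.7]
```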
Folks, I strongly recommend that you update your documentation in FasterRCNN — TAO Toolkit 3.22.02 documentation
You instruct the user to set pre-cluster-threshold in the config, but there is no mention that cluster-mode needs to be set in addition to that.
Yes, I know that cluster-mode is mentioned in the nvinfer plugin documentation. But one could think that if it’s not in the sample config for that particular deployment, then it’s not necessary to have it, especially since, as far as I can see, NMS is the one and only clustering mode for FasterRCNN.
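Concretely, the documentation fix being asked for would amount to showing cluster-mode explicitly next to the threshold in the sample config, something like this sketch (values illustrative, not the official sample):

```ini
[property]
# Explicitly select NMS clustering (cluster-mode=2) rather than
# relying on the nvinfer default
cluster-mode=2

[class-attrs-all]
pre-cluster-threshold=0.6
# IoU threshold used by NMS when merging overlapping boxes
nms-iou-threshold=0.5
```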
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.