Issues with batch size with DINO

• Hardware Platform (Jetson / GPU) → dGPU (AWS T4)
• DeepStream Version → 6.4
• TensorRT Version → 8.6

We are running inference on videos using DeepStream 6.4 with a DINO model.
We tried to generate engine files from the DINO ONNX file we exported with the TAO Toolkit, which had batch_size=-1 set in the configuration for the PyTorch-to-ONNX conversion.

We were able to get the engine file generated by the DeepStream pipeline with batch-size=1 in the config and to run inference.

But when running with higher batch sizes, an engine was generated, yet inference failed with the error: "A batch of multiple frames received from the same source. Set sync-inputs property of streammux to TRUE." Since we are running a single video we only have a single source, so, as the message suggests, I added sync-inputs to the existing streammux configuration (a rough sketch is below).
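Roughly, the streammux part of our pipeline now looks like the following (a sketch assuming a GStreamer Python pipeline; the resolution and timeout values are placeholders, and the relevant addition is sync-inputs):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# nvstreammux configured for a single video source at batch-size 8.
streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
streammux.set_property("width", 1920)                  # placeholder resolution
streammux.set_property("height", 1080)                 # placeholder resolution
streammux.set_property("batch-size", 8)                # matches batch-size in the nvinfer config
streammux.set_property("batched-push-timeout", 40000)  # placeholder timeout in microseconds
streammux.set_property("sync-inputs", True)            # the property the error message asks for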


After adding this, the error disappeared, but the pipeline did not process the video or produce any output the way it did for batch-size=1.

Part of the model config is as follows.

[property]
gpu-id=0
onnx-file=../../inference_base/dino/dino_model_v1.onnx
labelfile-path=../../inference_base/dino/labels.txt
model-engine-file=../../inference_base/dino/dino_model_v1.onnx_b8_gpu0_fp16.engine
batch-size=8
network-mode=2

My questions are

  1. Do any additional settings need to be configured to make this work?
  2. Could the way we generated the ONNX file affect this and cause this behavior? (A sketch of how we can check the exported batch dimension is included after this list.)
  3. Can we use an increased batch size with a single source?
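Related to question 2, one thing we can check on our side is whether the exported ONNX really has a dynamic batch dimension (which batch_size=-1 should produce). A minimal sketch using the onnx Python package, assuming the file name from our config:

import onnx

# Load the exported DINO ONNX model (file name as used in our nvinfer config).
model = onnx.load("dino_model_v1.onnx")

# Print each graph input and its shape; a dynamic batch dimension shows up as a
# symbolic name (dim_param) or 0 instead of a fixed positive integer.
for inp in model.graph.input:
    dims = [d.dim_param if d.dim_param else d.dim_value
            for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)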

Moving to DeepStream forum.

Could you attach your ONNX model for us?

I have included the ONNX file here: dino_dino_model_v1.onnx - Google Drive

Judging from the model alone, there is no problem. Can you provide us with the whole project? We can debug it on our side. If the code is not convenient to share publicly, you can message me directly via my profile icon.

Thanks for the update.

We are attempting to predict on multiple frames from the same source, since we are using a single video file. Does DeepStream support batched prediction from the same source? We will also try to share our code with you.
Thanks.

Yes, that is supported. You can try to reproduce your issue with one of our deepstream_python_apps demos. If we can reproduce the problem in our demo, we can analyze it faster.

Hi, thank you. We are now able to run with different batch sizes.

I have two further questions.

  1. We see that our model only ever shows scores >= 0.3,
    but the threshold values in the model config are set as follows.
cluster-mode=2
[class-attrs-all]
pre-cluster-threshold=0.05
#post-cluster-threshold=0.4
topk=20
nms-iou-threshold=0.5

Since post-cluster-threshold is commented out and pre-cluster-threshold is 0.05, I would expect detections with scores >= 0.05, but only scores above 0.3 are shown. Are there any other thresholds we are missing here? (A sketch of how we read the reported scores is included after the questions below.)

  2. Is there any way to tune the model to improve the accuracy of models used in DeepStream?
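For reference, we read the reported scores with a pad probe modelled on the deepstream_python_apps samples; a rough sketch (probe placement and names are illustrative) follows:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def print_scores_probe(pad, info, u_data):
    # Walk the batch -> frame -> object metadata and print each detection score.
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            print("class", obj_meta.class_id, "score", obj_meta.confidence)
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Attached to the source pad of the nvinfer element, e.g.:
# pgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, print_scores_probe, 0)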

Thank you.

You can refer to our guide for setting thresholds: Gst-nvinfer Class-attributes Group Supported Keys.

You can refer to our FAQ for improving accuracy: Debug Tips for DeepStream Accuracy Issue.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.