• Hardware Platform (Jetson / GPU) → dGPU (AWS T4)
• DeepStream Version → 6.4
• TensorRT Version → 8.6
We are running inference on videos using DeepStream 6.4 with a DINO model.
We generated the engine files from a DINO ONNX file exported with the TAO Toolkit, using batch_size=-1 (dynamic batch) in the configuration for the PyTorch-to-ONNX conversion.
We were able to get the engine file generated from the DeepStream pipeline with batch-size=1 in the config and run inference.
However, when running with higher batch sizes, an engine was generated, but inference failed with the error: "A batch of multiple frames received from the same source. Set sync-inputs property of streammux to TRUE." Since we are running a single video, we only have a single source, so as stated above I added the following to the existing streammux configuration:
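For reference, this property can be set in the [streammux] group of the DeepStream app config file. A minimal sketch, with illustrative values (not the poster's actual config, and assuming the standard deepstream-app config format):

```ini
[streammux]
gpu-id=0
# Must match the batch-size used by nvinfer for batched prediction.
batch-size=4
# Resolution the muxer scales incoming frames to.
width=1920
height=1080
batched-push-timeout=40000
# Per the error message: synchronize inputs when batching multiple
# frames from the same source.
sync-inputs=1
```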
Judging from the model alone, there is no problem. Can you provide us with the whole project? We can debug it on our side. If the code is not convenient to share publicly, you can send it to me via private message.
We are attempting to predict multiple frames from the same source, since we are using a video file. Does DeepStream support batched prediction from a single source? We will also try to share our code with you.
Thanks.
Yes, we can support that. You can try to reproduce your issue with one of our deepstream_python_apps demos. If we can reproduce the problem in our demo, we can also analyze it faster.
Since post-cluster-threshold has been removed and pre-cluster-threshold is 0.05, I would expect detections with a score >= 0.05, but the output only shows scores above 0.3. Are there any other thresholds we are missing here?
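For context, pre-cluster-threshold is set per class-attributes group in the nvinfer config file. A minimal sketch of the fragment in question, assuming the [class-attrs-all] group is used so the threshold applies to every class:

```ini
[class-attrs-all]
# Confidence threshold applied before clustering; detections scoring
# below this value are dropped by nvinfer at this stage.
pre-cluster-threshold=0.05
```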
Is there any way to tune the model to improve the accuracy of models used in DeepStream?
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.