Please provide complete information as applicable to your setup.
• Hardware Platform (GPU)
• DeepStream Version 6.0
• TensorRT Version 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only) 495.44
• Issue Type (question / bug)
I have an ONNX model exported with a dynamic batch size. It builds into a TensorRT engine without problems when the nvinfer plugin converts it directly.
However, I get this log warning:
Cuda failure: status=1 in cuResData at line 348
It appears when the engine batch size is > 1 and the streammux batch size is also set > 1. The program still runs correctly, but the warning is logged repeatedly and resource usage is much higher.
The warning does not appear when running the program with:
- streammux batch size = 1 and engine batch size > 1
- streammux batch size = 1 and engine batch size = 1
- streammux batch size > 1 and engine batch size = 1
but in these configurations the performance is lower than with batch size > 1 for both streammux and the engine.
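For reference, this is the configuration combination that triggers the warning. A minimal deepstream-app config sketch, with hypothetical values (the batch size of 4 and the file name `config_infer_primary.txt` are illustrative, not from my actual setup):

```ini
# Sketch: streammux batch size and nvinfer batch size both > 1.
# The engine's maximum dynamic batch dimension should cover this value.
[streammux]
batch-size=4

[primary-gie]
# The nvinfer config file referenced here sets batch-size=4
# in its [property] section as well.
config-file=config_infer_primary.txt
```

With both batch sizes set to 1 (or only one of them > 1) the warning disappears, as listed above, but throughput drops.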
Please help! Thank you.