Hi, I have a YOLO model in ONNX format and I want to convert it to a TensorRT engine. My question is: does TensorRT recommend a dynamic batch size or a fixed one?
I will run inference with a maximum batch size of 2. Is a dynamic batch necessary, and which is faster?
Is your application always going to use a batch size of 2?
My application connects two cameras for object detection, so it normally uses a batch size of 2. But when one camera disconnects, or a frame cannot be obtained for some other reason, it falls back to a batch size of 1.
We recommend using dynamic shapes with an optimization profile. Please refer to the following helpful resources.
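For your two-camera case, the profile can be sketched with trtexec: min batch 1 (single camera), opt and max batch 2 (the common case). The input tensor name `images` and the 640x640 resolution are assumptions; substitute the actual input name and shape from your ONNX model.

```shell
# Build an engine with a dynamic batch dimension (1..2).
# Assumes the ONNX input tensor is named "images" with shape [batch, 3, 640, 640];
# adjust the name and dimensions to match your model.
trtexec --onnx=model_dynamic.onnx \
        --minShapes=images:1x3x640x640 \
        --optShapes=images:2x3x640x640 \
        --maxShapes=images:2x3x640x640 \
        --saveEngine=yolo_dynamic.engine
```

TensorRT tunes kernels for the `--optShapes` value, so setting it to your most frequent batch size (2 here) keeps the common path fast while still allowing batch 1.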
It looks like your model has static inputs, and the script also has some mistakes.
We recommend regenerating the ONNX model with a dynamic input shape.
Please refer to the following sample and doc for running inference.