Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson Xavier NX
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5
• Issue Type (questions, new requirements, bugs): question
When I use a TensorRT engine built with batch-size=8 to run inference on a single video input (so the actual input batch-size is 1), I find that the GPU utilization and inference speed are the same as when I use a TensorRT engine built with batch-size=1.
PS: both TensorRT engines were generated from the same model.
Is that normal?
Can you explain the reason? Thanks.
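For context, this is roughly how the two batch sizes interact in a deepstream-app pipeline. A sketch with placeholder file names, assuming the standard deepstream-app config format: the nvinfer `batch-size` is only the engine's *maximum* batch, while nvstreammux forms the actual batches, and with a single source it can only ever put 1 frame into each batch.

```ini
# deepstream-app config (illustrative values, paths are placeholders)

[streammux]
# With only ONE source connected, each muxed batch contains at most 1 frame,
# regardless of the value below, so the engine still runs at batch 1.
batch-size=8
batched-push-timeout=40000

[primary-gie]
config-file=pgie_config.txt

# pgie_config.txt ([property] section of the nvinfer config):
# batch-size=8                      -> max batch the engine accepts, not the actual batch
# model-engine-file=model_b8.engine -> placeholder engine name
```

In other words, if only one stream feeds the muxer, a batch-8 engine is still executed with 1 frame per inference call, which would explain identical GPU cost and speed.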