- Please convert & test the model with batch size 30; I'm curious about the theoretical inference performance.
- Please convert & test the model with batch size 60, to see whether inference performance improves with a higher batch size.
I remember trying this once before, and the FPS actually decreased. But let me do it again.
I would also like the log of "trtexec --loadEngine=saved.engine --fp16" to check the inference performance.
Okay, to sum it up.
1- I’ll convert the model with batch size 30.
2- I’ll convert the model with batch size 60.
3- Test both versions.
4- Run "trtexec --loadEngine=saved.engine --fp16" on the resultant engines and dump the logs?
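For reference, the steps above can be sketched as a shell session. This assumes an explicit-batch ONNX model; the file name `model.onnx`, the input tensor name `input`, and the 3x224x224 input dimensions are placeholders and need to be adjusted to the actual model:

```shell
# Build an FP16 engine with a fixed batch size of 30
# (model.onnx, "input", and 3x224x224 are assumed placeholders)
trtexec --onnx=model.onnx --fp16 \
        --shapes=input:30x3x224x224 \
        --saveEngine=saved_bs30.engine

# Same conversion with batch size 60
trtexec --onnx=model.onnx --fp16 \
        --shapes=input:60x3x224x224 \
        --saveEngine=saved_bs60.engine

# Profile each engine and capture the full log
trtexec --loadEngine=saved_bs30.engine --fp16 > bs30.log 2>&1
trtexec --loadEngine=saved_bs60.engine --fp16 > bs60.log 2>&1
```

Comparing the throughput and latency summaries at the end of the two logs should show whether the larger batch actually improves aggregate FPS.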
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
yes, please share the logs.