I converted the ONNX model of yolov3-pytorch (1x3x416x416) myself, added it to the jetson_benchmark project, and measured FPS=74 at batch_size=1. However, when I increase the batch size, the FPS grows linearly (batch_size=8 gives FPS=614.30), which is obviously wrong. Could you explain exactly how you produced the ONNX model, so I can see where I went wrong? Thanks.
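For what it's worth, the reported numbers are consistent with the engine ignoring the batch dimension entirely (e.g. an ONNX graph exported with a fixed batch of 1): per-"batch" latency stays roughly constant, so FPS = batch_size / latency scales almost perfectly linearly. A minimal arithmetic sketch of that suspicion (illustration only, not the actual benchmark code):

```python
# Sketch: if the engine really processes only one image per call, the
# per-call latency measured at batch_size=1 stays flat, and computing
# FPS as batch_size / latency inflates throughput linearly.
def fps(batch_size, latency_s):
    """Throughput if a whole batch completes in latency_s seconds."""
    return batch_size / latency_s

latency = 1 / 74.0          # per-call latency implied by FPS=74 (~13.5 ms)
print(fps(1, latency))      # 74.0
print(fps(8, latency))      # 592.0 -- close to the suspicious 614.30
```

With genuine batching, latency per call grows with batch size, so FPS improves sub-linearly; near-perfect linear scaling is the red flag here.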