We’re running a YOLOv5s model, converted to a TensorRT engine, on Triton Inference Server. Sending the same image to the server for inference returns different results each time. Is this expected, or is there something we can do to make inference deterministic? We are using a Jetson Nano 2GB and built the TensorRT engine on the Jetson Nano itself.
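For what it's worth, one way to narrow this down is to check whether repeated runs differ bit-for-bit or only by small floating-point noise (the latter is common with fused/parallel GPU kernels and can flip detections near the confidence threshold). A minimal sketch of such a check, where `infer_fn` is a stand-in for whatever client call sends the image to Triton, not the actual client code:

```python
import numpy as np

def check_determinism(infer_fn, image, runs=5, atol=1e-5):
    """Run the same input through infer_fn several times and report whether
    the outputs are bit-identical, and whether they are close within atol."""
    outputs = [np.asarray(infer_fn(image)) for _ in range(runs)]
    ref = outputs[0]
    identical = all(np.array_equal(ref, out) for out in outputs[1:])
    close = all(np.allclose(ref, out, atol=atol) for out in outputs[1:])
    return identical, close

# Example with a deterministic stand-in for the Triton call:
dummy_infer = lambda x: x * 2.0
img = np.ones((3, 4), dtype=np.float32)
identical, close = check_determinism(dummy_infer, img)
print(identical, close)
```

If `identical` is False but `close` is True, the variation is likely just floating-point non-associativity in the kernels; if even `close` is False, something else (preprocessing, dynamic batching, a bad engine) is worth investigating.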