We’re running a YOLOv5s model converted to a TensorRT engine and served by Triton Inference Server. Sending the same image to the server for inference returns different results each time. Is this normal, or is there something we can do to make inference deterministic? We are using a Jetson Nano 2GB and built the TensorRT engine on the Jetson Nano itself.
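To show what we mean by "different results," here is roughly how we measure the run-to-run variation: we compare the raw output tensors from two inferences on the same image before any post-processing. This is a minimal sketch (the `compare_outputs` helper, the tolerance, and the simulated arrays are ours for illustration; the `(1, 25200, 85)` shape is the standard YOLOv5s raw output at 640x640):

```python
import numpy as np

def compare_outputs(out_a, out_b, atol=1e-3):
    """Compare two raw output tensors from repeated inference on the same input.

    Returns the maximum absolute element-wise difference and whether it falls
    within the given tolerance (i.e. looks like ordinary float noise rather
    than genuinely different predictions).
    """
    diff = float(np.max(np.abs(out_a.astype(np.float32) - out_b.astype(np.float32))))
    return diff, bool(diff <= atol)

# Simulated example: two runs differing only by tiny floating-point noise.
rng = np.random.default_rng(0)
run1 = rng.random((1, 25200, 85), dtype=np.float32)  # YOLOv5s raw output shape at 640x640
run2 = run1 + rng.normal(0, 1e-5, run1.shape).astype(np.float32)

diff, ok = compare_outputs(run1, run2)
print(f"max abs diff: {diff:.2e}, within tolerance: {ok}")
```

If the maximum difference is on the order of 1e-5 or so, the variation is plausibly just non-deterministic floating-point accumulation order; if whole detections appear or disappear, something else is going on.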