NVIDIA Developer Forums
ONNX Model and Tensorrt Engine gives different output for parseq model
keivan.moazami
July 17, 2023, 4:54am
This is a bug in TensorRT version 8.6.1, and it will be fixed in the next release.
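Until the fix lands, mismatches like this are usually confirmed by running the same input through both ONNX Runtime and the TensorRT engine and diffing the outputs. Below is a minimal, hypothetical sketch of such a comparison helper; the synthetic arrays stand in for real inference outputs, and the function name and tolerances are illustrative, not from any NVIDIA API.

```python
import numpy as np

def compare_outputs(onnx_out, trt_out, rtol=1e-3, atol=1e-5):
    """Report whether two output tensors match within tolerance,
    plus the maximum absolute and relative differences."""
    onnx_out = np.asarray(onnx_out, dtype=np.float32)
    trt_out = np.asarray(trt_out, dtype=np.float32)
    abs_diff = np.abs(onnx_out - trt_out)
    max_abs = float(abs_diff.max())
    # Relative difference, guarding against division by zero.
    max_rel = float((abs_diff / np.maximum(np.abs(onnx_out), 1e-12)).max())
    matches = np.allclose(onnx_out, trt_out, rtol=rtol, atol=atol)
    return matches, max_abs, max_rel

# Synthetic data standing in for real ONNX Runtime / TensorRT outputs.
ref = np.linspace(0.0, 1.0, 8, dtype=np.float32)
close = ref + 1e-6   # small numerical noise: should pass
far = ref + 0.5      # genuine mismatch: should fail
print(compare_outputs(ref, close))
print(compare_outputs(ref, far))
```

In practice, NVIDIA's Polygraphy tool automates this kind of cross-runtime check (e.g. `polygraphy run model.onnx --trt --onnxrt`), which is usually the quickest way to confirm a TensorRT accuracy bug.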