Description
Model conversion (onnx2trt) failure
Environment
TensorRT Version: 6.0.1.5
GPU Type: Tesla T4
Nvidia Driver Version: 440.95.01
CUDA Version: 10.1
CUDNN Version: 7.6.5
Operating System + Version: Centos7
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): 1.9
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
Solution
How do I solve this problem? Thanks!
Hi @1965281904,
TRT 6 is an old release, and we recommend you try the latest release.
Alternatively, you can try converting your ONNX model with the trtexec command:
trtexec --onnx=your_model.onnx --verbose --explicitBatch --shapes=your_input_name:1x3x1216x800
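If the conversion succeeds, trtexec can also write the serialized engine to disk with --saveEngine; the output file name below is just a placeholder:
trtexec --onnx=your_model.onnx --explicitBatch --shapes=your_input_name:1x3x1216x800 --saveEngine=your_model.engine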
Please share the model if the issue persists.
Thanks!
Hi @AakankshaS,
1. I tried what you suggested: TensorRT 7.0.0.11 can complete the conversion of model1, but model2 cannot.
2. The errors encountered during the model2 conversion are as follows:
3. How can I solve this problem?
4. I want to know: what is the difference between the converted engine file and the .trt file? (e.g. xxx.engine, xxx.trt)
model2 files:
Hi @1965281904,
A recommendation would be to try the latest TRT release.
I don't think there is any difference.
Both files have the same content; the only difference is the filename.
An engine is a TRT model object interface that allows the application to execute inference, but if you want to save it to a file for future use, you need to serialize it into a buffer and then store that buffer in a plan file.
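For illustration, here is a minimal sketch of that serialize-and-save flow using the TensorRT 7 Python API; the file names and functions below are placeholders for this example, not taken from the original posts:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Build an engine from an ONNX model and serialize it to a plan file.
# The extension of the output file (.engine, .trt, .plan) does not matter.
def build_and_save(onnx_path, plan_path):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            return None
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30  # 1 GiB workspace
    engine = builder.build_engine(network, config)
    # Serialize the in-memory engine and store the buffer in a plan file
    with open(plan_path, "wb") as f:
        f.write(engine.serialize())
    return engine

build_and_save("your_model.onnx", "your_model.engine")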
Thanks!
Hi @AakankshaS
When I changed TensorRT to 7.2, it succeeded!
Thanks!