PyTorch to ONNX export fails when TensorRT is imported after torch

Hi, I want to use the TensorRT library in Python to measure the inference time of a PyTorch model. I start by converting the PyTorch model to ONNX, then I build the TensorRT engine using trtexec, and finally I measure the network's inference latency using a custom function written with the TensorRT Python API. When tensorrt is imported before torch in the script, everything works fine. However, when torch is imported before tensorrt, the conversion of the PyTorch model to ONNX using torch.onnx.export() crashes with a Windows access violation. How can I solve this issue?
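For reference, the latency-measurement step looks roughly like the sketch below. This is a minimal illustration of the TensorRT 8.x bindings API, not the actual custom function from my script; the engine filename, run counts, and random inputs are assumptions, and static input shapes are assumed.

import time

import numpy as np
import pycuda.autoinit  # noqa: F401 (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(TRT_LOGGER)

# Deserialize an engine previously built with trtexec.
with open("model.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate a device buffer for every binding and fill it with random data.
bindings, allocations = [], []
for i in range(engine.num_bindings):
    shape = tuple(engine.get_binding_shape(i))
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = np.random.rand(*shape).astype(dtype)
    dev = cuda.mem_alloc(host.nbytes)
    cuda.memcpy_htod(dev, host)
    allocations.append(dev)  # keep references so the memory is not freed
    bindings.append(int(dev))

# Warm up, then time synchronous executions.
for _ in range(10):
    context.execute_v2(bindings)
n_runs = 100
start = time.perf_counter()
for _ in range(n_runs):
    context.execute_v2(bindings)
print(f"mean latency: {(time.perf_counter() - start) / n_runs * 1000:.2f} ms")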

Environment

TensorRT Version:
GPU Type: GeForce RTX 3080 Ti Laptop GPU
Nvidia Driver Version: 522.06
CUDA Version: 11.8
CUDNN Version: 8.6
Operating System + Version: Windows 11
Python Version (if applicable): 3.10.10
PyTorch Version (if applicable): 2.0

Relevant Files

reproduce.py (739 Bytes)
I attached a Python script reproducing the issue.
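In essence, the script does something like the following rough sketch (the actual attachment differs; the model and filenames here are placeholders):

import torch     # swapping these two imports toggles the crash
import tensorrt  # noqa: F401

model = torch.nn.Linear(8, 4).eval()
dummy = torch.randn(1, 8)
torch.onnx.export(model, dummy, "model.onnx", opset_version=13)
print("done")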

Steps To Reproduce

  1. Run the script.
  2. When tensorrt is imported before torch, "done" is printed and the ONNX file is generated.
  3. When torch is imported before tensorrt, a Windows access violation occurs.

Error message:
Windows fatal exception: access violation

Current thread 0x0000305c (most recent call first):
File "C:\Users\xxxx\anaconda3\envs\test\lib\site-packages\torch\onnx\utils.py", line 993 in _create_jit_graph
File "C:\Users\xxxx\anaconda3\envs\test\lib\site-packages\torch\onnx\utils.py", line 1113 in _model_to_graph
File "C:\Users\xxxx\anaconda3\envs\test\lib\site-packages\torch\onnx\utils.py", line 1548 in _export
File "C:\Users\xxxx\anaconda3\envs\test\lib\site-packages\torch\onnx\utils.py", line 506 in export
File "C:\Users\xxxx\Documents\pyTorchAndtensorRT\reproduce.py", line 24 in <module>


Hi,
Could you share the ONNX model and the script, if not already shared, so that we can assist you better?
In the meantime, you can try a few things:

  1. Validate your model with the snippet below:

check_model.py

import onnx

filename = "model.onnx"  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command.
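For example, a typical invocation (the filenames here are placeholders) looks like:

trtexec --onnx=model.onnx --saveEngine=model.engine --verbose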

In case you are still facing the issue, please share the trtexec "--verbose" log for further debugging.
Thanks!

Hi,
The issue arises before I can call trtexec, since it happens when converting from PyTorch to ONNX. I believe it is an issue related to TensorRT, because the order in which I import the Python packages matters.


We are unable to reproduce this issue. We could successfully run the script in both cases (importing torch before and after tensorrt).

Please check that your conda setup has sufficient permissions and is configured correctly.

Thank you.

Hi, thank you for your feedback! I realized that I forgot to include the TensorRT version of my environment: I am on TensorRT 8.5.3.1. Can you confirm that you are unable to reproduce this behaviour with this additional information? Thank you.

I faced the same issue with TensorRT 8.6.1, CUDA 12.1, PyTorch 2.1.2 (and 2.1.1), Windows 11 (up to date), the latest drivers (546.33), the latest NVIDIA software, etc.
On my side, the script was silently killed while running torch.onnx.export (no error message), regardless of the device (cpu, cuda) and data type (fp16, float32).

Importing tensorrt before torch solved my problem.
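In other words, the workaround is just to order the imports like this (minimal sketch):

import tensorrt  # noqa: F401 (must come before torch on affected setups)
import torch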

It drove me mad, because I had spent far too much time before identifying the root cause.
Thanks a lot.
