Trtexec crash on Windows 10 64-bit


I ran trtexec with the attached ONNX model file and the following command in a Windows PowerShell terminal: .\trtexec.exe --onnx=model.onnx --workspace=4000 --verbose | tee trtexec_01.txt. It crashed without printing any errors. I've also attached the verbose output file trtexec_01.txt to this post; you can see that the output stops abruptly before trtexec finished optimizing the engine and running inference.
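Since the output cuts off abruptly, the last lines of the verbose log are the best clue to which build phase trtexec was in when it died. A minimal stdlib sketch for pulling out that tail (the sample log lines below are invented for illustration, not taken from the attached file):

```python
from collections import deque

def last_lines(lines, n=5):
    """Return the last n non-empty lines of a trtexec log."""
    return list(deque((l for l in lines if l.strip()), maxlen=n))

# Invented sample of a verbose trtexec log that cuts off mid-build.
sample_log = """\
[06/15/2022-19:30:40] [V] [TRT] Parsing node: Conv_0 [Conv]
[06/15/2022-19:30:41] [V] [TRT] Applying generic optimizations to the graph
[06/15/2022-19:30:42] [V] [TRT] Tactic: 0x1 Time: 0.042
""".splitlines()

tail = last_lines(sample_log, n=2)
print("\n".join(tail))
```

Running this against the real trtexec_01.txt (e.g. reading it with open(...).readlines()) shows whether the crash happened during ONNX parsing, tactic timing, or engine serialization.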

Upon looking into the Windows event log, I found an Error entry (screenshot attached) with the details shown below. There is also an Information entry with more details (also attached). I can run this command with the same ONNX model on numerous other Windows machines just fine, so the problem is specific to this machine. Any ideas or suggestions would be appreciated. Detailed system information is included below for reference.

 <Event xmlns="">
  <Provider Name="Application Error" />
  <EventID Qualifiers="0">1000</EventID>
  <TimeCreated SystemTime="2022-06-15T19:30:42.0147880Z" />
  <Correlation />
  <Execution ProcessID="0" ThreadID="0" />
  <Security />
  <Data />
  <Data />
 </Event>
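The pasted entry above is trimmed (the xmlns and the Data values were stripped by the forum), but an Application Error entry with Event ID 1000 normally carries the faulting application and exception code in its Data fields. A small stdlib sketch for pulling those out; the "trtexec.exe" and "c0000005" values are hypothetical examples, not taken from the attached screenshot:

```python
import xml.etree.ElementTree as ET

# A trimmed Application Error entry shaped like the one above.
# The <Data> values are hypothetical; the real entry names the
# faulting application and the exception code.
event_xml = """
<Event xmlns="">
  <Provider Name="Application Error" />
  <EventID Qualifiers="0">1000</EventID>
  <Execution ProcessID="0" ThreadID="0" />
  <Data>trtexec.exe</Data>
  <Data>c0000005</Data>
</Event>
"""

root = ET.fromstring(event_xml)
provider = root.find("Provider").get("Name")
event_id = root.find("EventID").text
data = [d.text for d in root.findall("Data")]

print(provider, event_id, data)
```

Note that a full entry exported from Event Viewer is namespaced and nests these elements under System and EventData, so the find() paths would need adjusting for real exports.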


TensorRT Version:
GPU Type: RTX 3090
Nvidia Driver Version: 512.95
CUDA Version: 11.6
CUDNN Version: 8.4
Operating System + Version: Windows 10 64-bit Enterprise Version 10.0.19042 Build 19042
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Verbose output file:

trtexec_01.txt (2.8 MB)

Event Log Error Entry:

Event Log Information Entry:

ONNX model file:

model.onnx (14.3 MB)

System information file (collected with msinfo32):

msinfo32.log (247.4 KB)

Steps To Reproduce

Run the trtexec command shown above with the attached model.onnx.

Please refer to the link below for the sample guide.

Refer to the installation steps in the link in case you are missing anything.

However, the suggested approach is to use the TRT NGC containers to avoid any system-dependency-related issues.

To run the Python samples, make sure the TRT Python packages are installed when using the NGC container.

If you are trying to run a custom model, please share your model and script with us so that we can assist you better.
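The NGC container route mentioned above can be sketched like this (the image tag is only an example; check the NGC catalog for a current tag matching your driver):

```shell
# Pull a TensorRT NGC container (tag is an example, see the NGC catalog)
docker pull nvcr.io/nvidia/tensorrt:22.05-py3

# Run it with GPU access and the current directory mounted in
docker run --gpus all -it --rm -v ${PWD}:/workspace/models \
    nvcr.io/nvidia/tensorrt:22.05-py3

# Inside the container, trtexec is already on the PATH:
trtexec --onnx=/workspace/models/model.onnx --workspace=4000 --verbose
```

This isolates the TensorRT/CUDA/cuDNN stack from whatever is installed on the host, which helps rule out a machine-specific installation problem.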

Hi, yes, I've shared the model in the original post. And as mentioned, this command runs fine on many other machines, so I highly doubt it's an installation error.


We couldn't find any errors in the logs. Are those the complete logs? Also, please make sure TensorRT is installed correctly, and try reinstalling TensorRT.

Thank you.