Description
When I use TensorRT's trtexec command to import and test a simple ONNX model, the process gets stuck and the graphics card driver hangs. Afterwards, the nvidia-smi command reports that no GPU is found. This only happens on my RTX 3080 machine; there is no problem on 1060, 3060, or 3070 cards. I provide two ONNX models: one causes the process to get stuck during engine building, and the other during inference. The trtexec log from each run is attached.
Environment
TensorRT Version: 7.2.1 OR 8.2.0
GPU Type: NVIDIA GeForce RTX 3080
Nvidia Driver Version: 470.63
CUDA Version: 11.1 OR 11.4
CUDNN Version: 8.0.5 OR 8.2.1
Operating System + Version: Ubuntu 18.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
model: model_crash_during_building.onnx (20.6 KB)
log: trtexec_stuck_when_building (11.8 KB)
model: model_crash_during_inference.onnx (20.6 KB)
log: trtexec_struck_when_inferencing (29.0 KB)
Steps To Reproduce
On the 3080 machine, run the following command:
./trtexec --onnx=model_crash_during_building.onnx --verbose --explicitBatch
OR
./trtexec --onnx=model_crash_during_inference.onnx --verbose --explicitBatch
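To keep the shell usable while reproducing the hang, the repro command can be wrapped with coreutils `timeout` and followed by a quick driver health check. This is only a diagnostic sketch (the 600-second bound and the Xid grep are assumptions, not part of the original repro); it does not change trtexec behavior:

```shell
# Bound the hang so the shell regains control; coreutils `timeout`
# returns exit code 124 when the time limit is hit.
timeout 600 ./trtexec --onnx=model_crash_during_building.onnx --verbose --explicitBatch
echo "trtexec exit code: $?"   # 124 means the timeout fired (process was stuck)

# Check whether the driver logged an Xid error and whether the GPU
# is still visible after the hang.
sudo dmesg | grep -i xid
nvidia-smi
```

If `nvidia-smi` reports no GPU afterwards, the accompanying `dmesg` Xid entry (if any) is useful to include in the report.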