Segmentation fault while converting the model with trt8.0.1.6


Please read the problem to the end to see all my trials.

I am trying to convert my ONNX deep learning model to a TensorRT engine using TRT 8.0.1.6 through this Docker image here.
The model gives me this error (full verbose log attached):

[11/09/2021-08:47:47] [V] [TRT] Deleting timing cache: 1816 entries, 4264 hits
[11/09/2021-08:47:47] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 2626, GPU 1824 (MiB)
[11/09/2021-08:47:47] [E] Error[1]: Unexpected exception std::bad_alloc
[11/09/2021-08:47:47] [E] Error[2]: [builder.cpp::buildSerializedNetwork::417] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed.)
Segmentation fault (core dumped)

I’ve converted this ONNX model successfully using both TRT 7 and TRT 7.2,
and I’ve converted a YOLO model successfully through the TRT 8 container,
so both the model and the image are okay.

Here’s the command I used while converting the model on an RTX 3090 machine:

trtexec --onnx=crowd_dynamic_1-4.onnx --explicitBatch --saveEngine=cc_trt8.engine --workspace=12288 --fp16 --optShapes=input:1x3x720x1280 --maxShapes=input:1x3x720x1280 --minShapes=input:1x3x720x1280 --shapes=input:1x3x720x1280 --verbose

I tried many different workspace sizes, from 3 GB up to 20 GB, and they all gave the same segmentation fault.


TensorRT Version: 8.0.1.6
GPU Type: RTX3090
Nvidia Driver Version: 495
CUDA Version: nvcc reports 11.4, but nvidia-smi reports 11.5
CUDNN Version: couldn’t get it
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): python3.8.10
PyTorch Version (if applicable): tried several versions (1.3, 1.5, 1.6 and 1.7)
Baremetal or Container (if container which image + tag): this image: Container Release Notes :: NVIDIA Deep Learning TensorRT Documentation
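For the missing cuDNN version, one way to read it inside the container is to grep the cuDNN header. The paths below are guesses at common install locations and may differ on your system:

```shell
# Header locations vary by install; these are common guesses.
for h in /usr/include/cudnn_version.h \
         /usr/include/x86_64-linux-gnu/cudnn_version.h \
         /usr/local/cuda/include/cudnn_version.h; do
  if [ -f "$h" ]; then
    # Prints CUDNN_MAJOR / CUDNN_MINOR / CUDNN_PATCHLEVEL defines
    grep -E '#define CUDNN_(MAJOR|MINOR|PATCHLEVEL)' "$h"
    found=1
    break
  fi
done
[ -z "$found" ] && echo "cudnn header not found"
```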

Attached files:

The verbose is attached

verbose.txt (911.4 KB)

Hi, the UFF and Caffe parsers have been deprecated from TensorRT 7 onwards, hence we request you to try the ONNX parser.
Please check the link below for the same.


I am not using Caffe or UFF;
I am trying to convert my ONNX model to an engine!
How is that related to your suggestion?

I solved it by removing --fp16 from the conversion command.
I don’t know why, but maybe some values were too large to cast to FP16. So I think the new problem will be insufficient accuracy.
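To illustrate why FP16 can hurt accuracy: float16 has a much narrower range (largest finite value is about 65504) and only ~10 mantissa bits, so large values overflow to inf and small differences are rounded away. A quick numpy sketch, illustrative only and not tied to this specific model:

```python
import numpy as np

# float16's largest finite value is about 65504; larger magnitudes overflow to inf
big = np.float32(70000.0)
print(np.float16(big))       # inf

# float16 has ~10 mantissa bits, so small differences near 1.0 are rounded away
near_one = np.float32(1.0001)
print(np.float16(near_one))  # 1.0
```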

As you can see, it’s solved, but without --fp16.
Now I want to convert the model with the --fp16 argument.


Could you please share the ONNX model that reproduces the issue so we can try it on our end for better debugging?

Thank you.