YOLOv3 to TensorRT - Segmentation fault on inference


I am currently using the following repository to convert YOLOv3 to TensorRT.

The same repository is present in the NGC container of TensorRT 5.1.

I can successfully convert YOLO to a .trt file, but I get a segmentation fault on inference.

TensorRT version:
CUDA version: 10.1
cuDNN version: 7.4.2
GPU: V100 (AWS)

Error dump:

Loading ONNX file from path yolov3.onnx...
Beginning ONNX file parsing
Completed parsing of ONNX file
Building an engine from file yolov3.onnx; this may take a while...
Completed creating Engine
Running inference on image dog.jpg...
Fatal Python error: Segmentation fault

Current thread 0x00007fbb1850b700 (most recent call first):
  File "/workspace/tensorrt/samples/python/yolov3_onnx/../common.py", line 145 in do_inference
  File "onnx_to_tensorrt.py", line 160 in main
  File "onnx_to_tensorrt.py", line 183 in <module>
Segmentation fault (core dumped)

Below is the function that throws the error:

def do_inference(context, bindings, inputs, outputs, stream, batch_size=1):
    start = time.time()
    # Transfer input data to the GPU.
    [cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
    # Run inference.
    context.execute_async(batch_size=batch_size, bindings=bindings, stream_handle=stream.handle)
    # Transfer predictions back from the GPU.
    [cuda.memcpy_dtoh_async(out.host, out.device, stream) for out in outputs]
    # Synchronize the stream.
    stream.synchronize()
    # Return only the host outputs.
    print("=> time: %.4f" % (time.time() - start))
    return [out.host for out in outputs]
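A common cause of a segfault at this point is a host buffer whose size does not match the engine binding it is copied into. As a quick sanity check, the expected element counts of the three YOLOv3 output bindings can be computed in plain Python (a sketch only, assuming the stock sample's 608x608 input and 80 COCO classes; `yolov3_output_sizes` is a hypothetical helper, not part of the sample):

```python
def yolov3_output_sizes(input_hw=(608, 608), num_classes=80, batch=1):
    # Sanity-check sketch (assumes the stock yolov3.onnx sample):
    # each of the three detection heads outputs 3 anchors x (4 box
    # coords + 1 objectness + num_classes) channels, at strides 32/16/8.
    channels = 3 * (num_classes + 5)
    h, w = input_hw
    return [batch * channels * (h // stride) * (w // stride)
            for stride in (32, 16, 8)]

# The host buffers passed to do_inference should hold exactly these
# many elements, in binding order.
print(yolov3_output_sizes())
```

If the allocated buffer sizes (from `trt.volume(engine.get_binding_shape(i))` in the sample's allocation code) disagree with the engine's actual bindings, the async memcpy writes past the buffer and crashes exactly as shown in the traceback above.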

Any help is appreciated.

Hello @meetdave06,

Were you able to resolve this issue?
I am also encountering a similar problem.


If you are using TensorRT 5, do not build the Python module when installing onnx-tensorrt.

I didn't build the Python modules while installing onnx-tensorrt.
I just followed these steps:

mkdir build
cd build
cmake .. -DTENSORRT_ROOT=/opt/tensorrt
make -j8
sudo make install

Yes, I am using that TensorRT version.

Here is what I did:

cmake -DCUDA_INCLUDE_DIRS=/usr/local/cuda-10.0/include -DTENSORRT_ROOT=/opt/tensorrt ..

Also, I switched to TensorRT 5.0

I switched back to TensorRT, but with CUDA 9.0 and cuDNN 7.5 it still doesn't work.
Maybe I should switch to CUDA 10.0 like yours!

I encountered the same problem, and my TensorRT build is for arm64.
Is there any way to upgrade TensorRT from 4.0 to 5.0? I tried to download version 5.0 from the official website --> https://developer.nvidia.com/nvidia-tensorrt-5x-download
But it does not seem to have a package for ARM embedded systems.

Hello, I am also using the repository https://github.com/xuwanqi/yolov3-tensorrt to convert yolov3.weights to yolov3.onnx and then yolov3.onnx to yolov3.trt.

I can successfully convert yolov3.weights to yolov3.onnx.

Now, when I run the command below to build the .trt file from the .onnx file,

python2 onnx_to_tensorrt.py

I get the error below:

Loading ONNX file from path ./yolov3.onnx...
Beginning ONNX file parsing
Completed parsing of ONNX file
Building an engine from file ./yolov3.onnx; this may take a while...
Completed creating Engine
Running inference on image dog.jpg...
Segmentation fault (core dumped)

What I have in my system

ii  graphsurgeon-tf                                             5.1.5-1+cuda10.0                             amd64        GraphSurgeon for TensorRT package
ii  libnvinfer-dev                                              5.1.5-1+cuda10.0                             amd64        TensorRT development libraries and headers
ii  libnvinfer-samples                                          5.1.5-1+cuda10.0                             all          TensorRT samples and documentation
ii  libnvinfer5                                                 5.1.5-1+cuda10.0                             amd64        TensorRT runtime libraries
ii  python-libnvinfer                                           5.1.5-1+cuda10.0                             amd64        Python bindings for TensorRT
ii  python-libnvinfer-dev                                       5.1.5-1+cuda10.0                             amd64        Python development package for TensorRT
ii  python3-libnvinfer                                          5.1.5-1+cuda10.0                             amd64        Python 3 bindings for TensorRT
ii  python3-libnvinfer-dev                                      5.1.5-1+cuda10.0                             amd64        Python 3 development package for TensorRT
ii  tensorrt                                                                     amd64        Meta package of TensorRT
ii  uff-converter-tf                                            5.1.5-1+cuda10.0                             amd64        UFF converter for TensorRT package
onnx                                                            1.1.1
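One thing worth ruling out with a listing like the one above: segfaults of this kind can also come from mixing TensorRT component versions (for example, the system `libnvinfer5` against a differently-versioned Python binding or a self-built onnx-tensorrt module). A small sketch that scans a `dpkg -l` dump and collects the versions of every `nvinfer` package (`DPKG_DUMP` here is an abridged, illustrative copy of the listing above):

```python
# Illustrative helper: confirm all nvinfer packages in a `dpkg -l`
# dump report one and the same version; a mismatch between the
# runtime library and the Python bindings is one possible cause of
# crashes at inference time.
DPKG_DUMP = """\
ii  libnvinfer5        5.1.5-1+cuda10.0  amd64  TensorRT runtime libraries
ii  python-libnvinfer  5.1.5-1+cuda10.0  amd64  Python bindings for TensorRT
ii  python3-libnvinfer 5.1.5-1+cuda10.0  amd64  Python 3 bindings for TensorRT
"""

def tensorrt_versions(dump):
    versions = set()
    for line in dump.splitlines():
        parts = line.split()
        # dpkg -l columns: status, package name, version, arch, description
        if len(parts) >= 3 and "nvinfer" in parts[1]:
            versions.add(parts[2])
    return versions

# A single-element set means the install is internally consistent.
print(tensorrt_versions(DPKG_DUMP))
```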

I have successfully built ONNX-TensorRT using the repository https://github.com/onnx/onnx-tensorrt; all the steps complete successfully. I do not know why I am getting the error Segmentation fault (core dumped).

Note: while building ONNX-TensorRT, I used the command below:

cmake .. -DTENSORRT_ROOT=/usr/src/tensorrt

Can anyone explain what I should do now to overcome this error?

I have solved the above error: instead of building the YOLO TensorRT model from this GitHub repository, I used the sample placed at the following location


While building the code, I got stuck partway and hit an error. Before showing the error and how I overcame it, let me list the steps I followed to build the code:

python2 -m pip install -r requirements.txt
# convert yolo weights to yolo.onnx
python2 yolov3_to_onnx.py
# convert yolo.onnx to yolo.trt
python2 onnx_to_tensorrt.py

The error I got after executing the command:

python2 yolov3_to_onnx.py
Traceback (most recent call last):
  File "yolov3_to_onnx.py", line 812, in <module>
  File "yolov3_to_onnx.py", line 805, in main
  File "/home/gpu/.local/lib/python2.7/site-packages/onnx/checker.py", line 82, in check_model
onnx.onnx_cpp2py_export.checker.ValidationError: Input size 2 not in range [min=1, max=1].

==> Context: Bad node spec: input: "085_convolutional_lrelu" input: "086_upsample_scale" output: "086_upsample" name: "086_upsample" op_type: "Upsample" attribute { name: "mode" s: "nearest" type: STRING }

How I resolved this

I got the solution from https://devtalk.nvidia.com/default/topic/1052153/jetson-nano/tensorrt-backend-for-onnx-on-jetson-nano/1, from the answer given by sojohans: upgrade onnx to onnx==1.4.1. Please execute the commands below:

pip2 uninstall onnx
pip2 install onnx==1.4.1 --user
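For context on why the onnx version matters: starting with ONNX opset 9, `Upsample` takes its scales as a second input rather than as an attribute, which is why an older onnx checker rejects the two-input node with `Input size 2 not in range [min=1, max=1]`. The rejected node itself just performs nearest-neighbor upsampling; a minimal NumPy sketch of that operation (illustration only, not the sample's code):

```python
import numpy as np

def upsample_nearest(x, scale=2):
    """Nearest-neighbor upsample of an NCHW tensor by an integer scale,
    i.e. what a YOLOv3 Upsample node with scales (1, 1, 2, 2) computes."""
    return x.repeat(scale, axis=2).repeat(scale, axis=3)

x = np.arange(4, dtype=np.float32).reshape(1, 1, 2, 2)
y = upsample_nearest(x)
print(y.shape)  # (1, 1, 4, 4)
```

Upgrading to onnx 1.4.1 brings in a checker that understands the opset-9 form of the node, so validation passes.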

Hi sanpreetsingh,

I am using onnx==1.4.1 but still hit this problem. Any help is appreciated.

Is there a good reason to use Python 2?

br. Markus