The same repository is present in the NGC container for TensorRT 5.1.
I can successfully convert YOLO to a .trt file, but I get a segmentation fault on inference.
TensorRT version : 5.1.2.2
CUDA version : 10.1
cuDNN version : 7.4.2
GPU : V100 (AWS)
Error dump:
Loading ONNX file from path yolov3.onnx...
Beginning ONNX file parsing
Completed parsing of ONNX file
Building an engine from file yolov3.onnx; this may take a while...
Completed creating Engine
Running inference on image dog.jpg...
Fatal Python error: Segmentation fault
Current thread 0x00007fbb1850b700 (most recent call first):
File "/workspace/tensorrt/samples/python/yolov3_onnx/../common.py", line 145 in do_inference
File "onnx_to_tensorrt.py", line 160 in main
File "onnx_to_tensorrt.py", line 183 in <module>
Segmentation fault (core dumped)
Below is the function where it throws the error:
import time

import pycuda.driver as cuda

def do_inference(context, bindings, inputs, outputs, stream, batch_size=1):
    start = time.time()
    # Transfer input data to the GPU.
    [cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
    # Run inference.
    context.execute_async(batch_size=batch_size, bindings=bindings, stream_handle=stream.handle)
    # Transfer predictions back from the GPU.
    [cuda.memcpy_dtoh_async(out.host, out.device, stream) for out in outputs]
    # Synchronize the stream.
    stream.synchronize()
    print("=> time: %.4f" % (time.time() - start))
    # Return only the host outputs.
    return [out.host for out in outputs]
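For context, do_inference assumes the inputs and outputs lists were built by the companion allocate_buffers() helper in the same common.py, and a segfault in the async memcpy calls is most often a host buffer whose size does not match the engine binding. Below is a sketch of that helper as it appears in the TensorRT 5.x samples, reconstructed from memory, so compare it against your local common.py rather than taking it verbatim:

import pycuda.autoinit  # initializes a CUDA context as a side effect of import
import pycuda.driver as cuda
import tensorrt as trt

class HostDeviceMem(object):
    # Pairs a pagelocked host buffer with its device allocation.
    def __init__(self, host_mem, device_mem):
        self.host = host_mem
        self.device = device_mem

def allocate_buffers(engine):
    inputs, outputs, bindings = [], [], []
    stream = cuda.Stream()
    for binding in engine:
        # The host buffer must cover the full binding volume times the max
        # batch size; a mismatch here is a classic cause of the segfault
        # seen in do_inference.
        size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        host_mem = cuda.pagelocked_empty(size, dtype)
        device_mem = cuda.mem_alloc(host_mem.nbytes)
        bindings.append(int(device_mem))
        if engine.binding_is_input(binding):
            inputs.append(HostDeviceMem(host_mem, device_mem))
        else:
            outputs.append(HostDeviceMem(host_mem, device_mem))
    return inputs, outputs, bindings, stream

If your buffers come from anywhere else, print engine.get_binding_shape for each binding and verify it matches the shape of the array you copy in.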
I encountered the same problem; my TensorRT version is 4.0.3.0-1+cuda10.0 arm64.
Is there any method to upgrade TensorRT from 4.0 to 5.0? I tried to download version 5.0 from the official website: https://developer.nvidia.com/nvidia-tensorrt-5x-download
But it does not seem to offer a package for ARM embedded systems…
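(As a side note, one quick way to confirm which TensorRT version the Python bindings actually load, a small diagnostic sketch rather than anything from the original post:)

import tensorrt as trt
print(trt.__version__)  # e.g. a 5.1.x string on the systems discussed above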
Hello, I am also using the xuwanqi/yolov3-tensorrt repository on GitHub to convert yolov3.weights to yolov3.onnx and then yolov3.onnx to yolov3.trt.
I succeeded in converting yolov3.weights to yolov3.onnx.
Now, when I run the script below to build the .trt file from the .onnx file:
python2 onnx_to_tensorrt.py
I get the following error:
Loading ONNX file from path ./yolov3.onnx...
Beginning ONNX file parsing
Completed parsing of ONNX file
Building an engine from file ./yolov3.onnx; this may take a while...
Completed creating Engine
Running inference on image dog.jpg...
Segmentation fault (core dumped)
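Since the log shows the engine builds successfully and the crash only happens at the inference step, one way to narrow it down is to serialize the engine and reload it in a separate run, so engine building and inference can be tested in isolation. A sketch under the TensorRT 5.x Python API, assuming engine is the ICudaEngine returned by the build step:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# After building: persist the engine so it need not be rebuilt every run.
with open("yolov3.trt", "wb") as f:
    f.write(engine.serialize())

# In a later run: reload the engine and retry only the inference path.
with open("yolov3.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

If inference on the reloaded engine still segfaults, the problem is in the buffer setup or post-processing, not in the ONNX parsing or engine build.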
What I have installed on my system:
ii graphsurgeon-tf 5.1.5-1+cuda10.0 amd64 GraphSurgeon for TensorRT package
ii libnvinfer-dev 5.1.5-1+cuda10.0 amd64 TensorRT development libraries and headers
ii libnvinfer-samples 5.1.5-1+cuda10.0 all TensorRT samples and documentation
ii libnvinfer5 5.1.5-1+cuda10.0 amd64 TensorRT runtime libraries
ii python-libnvinfer 5.1.5-1+cuda10.0 amd64 Python bindings for TensorRT
ii python-libnvinfer-dev 5.1.5-1+cuda10.0 amd64 Python development package for TensorRT
ii python3-libnvinfer 5.1.5-1+cuda10.0 amd64 Python 3 bindings for TensorRT
ii python3-libnvinfer-dev 5.1.5-1+cuda10.0 amd64 Python 3 development package for TensorRT
ii tensorrt 5.1.5.0-1+cuda10.0 amd64 Meta package of TensorRT
ii uff-converter-tf 5.1.5-1+cuda10.0 amd64 UFF converter for TensorRT package
onnx 1.1.1
I solved the above error by not building the YOLO TensorRT model from that GitHub repository, and instead using the sample located at:
/usr/src/tensorrt/samples/python/yolov3_onnx
While building with this sample I got stuck along the way and hit an error. Before showing the error and how I overcame it, let me list the steps I followed to build the code:
python2 -m pip install -r requirements.txt
# convert yolov3.weights to yolov3.onnx
python2 yolov3_to_onnx.py
# convert yolov3.onnx to yolov3.trt
python2 onnx_to_tensorrt.py
Here is the error I got after executing the command
python2 yolov3_to_onnx.py
Traceback (most recent call last):
File "yolov3_to_onnx.py", line 812, in <module>
main()
File "yolov3_to_onnx.py", line 805, in main
onnx.checker.check_model(yolov3_model_def)
File "/home/gpu/.local/lib/python2.7/site-packages/onnx/checker.py", line 82, in check_model
C.check_model(model.SerializeToString())
onnx.onnx_cpp2py_export.checker.ValidationError: Input size 2 not in range [min=1, max=1].
==> Context: Bad node spec: input: "085_convolutional_lrelu" input: "086_upsample_scale" output: "086_upsample" name: "086_upsample" op_type: "Upsample" attribute { name: "mode" s: "nearest" type: STRING }
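For what it's worth, this ValidationError usually points to an onnx version mismatch rather than a bug in the model itself: the script emits an Upsample node with two inputs (the opset-9 form, where the scales live in a second input tensor), while an older installed onnx package validates against the earlier schema in which Upsample takes exactly one input and a scales attribute. A quick way to see what the installed package supports, as a diagnostic sketch rather than anything from the original post:

import onnx

print(onnx.__version__)                # installed onnx package version
print(onnx.defs.onnx_opset_version())  # highest opset its checker understands

If the reported opset is below 9, upgrading onnx to a release that knows opset 9 (1.4.x or later) should let the checker accept the two-input Upsample node.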