'nv_onnx_parser_bindings.ONNXParser' object has no attribute 'convert_to_trt_network'

Description

I am following the instructions in the TensorRT 4.0.1 developer guide and trying to implement chapter 2.9.4, importing from ONNX using Python, but the following error appears:

AttributeError: 'nv_onnx_parser_bindings.OnnxConfig' object has no attribute 'report_parsing_info'

However, since report_parsing_info is an optional debug option, I simply commented it out and ran the code again. Then the following error occurred:

AttributeError: 'nv_onnx_parser_bindings.ONNXParser' object has no attribute 'convert_to_trt_network'

Could you help me address these annoying issues? Thanks in advance.

Environment

TensorRT Version: 4.0.16
GPU Type: GT710
Nvidia Driver Version: 384.130
CUDA Version: 8.0.61
CUDNN Version: 7.1.3
Operating System + Version: Ubuntu 16.04
Python Version (if applicable): 2.7
TensorFlow Version (if applicable): NONE
PyTorch Version (if applicable): 1.0.1.post2
Baremetal or Container (if container which image + tag):

Relevant Files

The code can be downloaded from:

and the ONNX file can be downloaded from:

Thanks in advance, and I look forward to your reply.

After looking through the API documentation:
https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt_401/tensorrt-api/python_api/workflows/manually_construct_tensorrt_engine.html

I found that there is a difference between the API documentation and the developer guide.

According to the API documentation, the call should be:

    trt_parser.convert_to_trtnetwork()

instead of what is shown in the developer guide:

    trt_parser.convert_to_trt_network()
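For context, the import workflow I am following looks roughly like the sketch below, with the corrected method name. Apart from convert_to_trtnetwork(), the helper names are my reading of the legacy TensorRT 4 Python API from the workflow page linked above, so they may differ slightly in your installation:

    import tensorrt as trt
    from tensorrt.parsers import onnxparser  # legacy TensorRT 4 Python API

    # Configure the ONNX parser (helper names assumed from the workflow page above)
    apex = onnxparser.create_onnxconfig()
    apex.set_model_file_name("model.onnx")          # example path
    apex.set_model_dtype(trt.infer.DataType.FLOAT)

    # Parse the model and convert it to a TensorRT network
    trt_parser = onnxparser.create_onnxparser(apex)
    trt_parser.parse("model.onnx", trt.infer.DataType.FLOAT)
    trt_parser.convert_to_trtnetwork()   # convert_to_trtnetwork(), not convert_to_trt_network()
    trt_network = trt_parser.get_trtnetwork()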

So I have fixed this problem. Thanks. But unfortunately, I have run into another issue:

1331: Conv -> [32,16,8]
1332: BatchNormalization -> [32,16,8]
1333: Relu -> [32,16,8]
1334: Conv -> [32,8,4]
1335: BatchNormalization -> [32,8,4]
1336: Relu -> [32,8,4]
1337: Conv -> [128,8,4]
1338: BatchNormalization -> [128,8,4]
1339: Relu -> [128,8,4]
convert_tensor 1342
convert_tensor 1341
convert_tensor 1340
convert_tensor 1251
1340: Conv -> [128,8,4]
1341: BatchNormalization -> [128,8,4]
1342: Relu -> [128,8,4]
1343: Add -> [128,8,4]
[2020-06-09 02:23:53 ERROR] (Unnamed Layer 162) [ElementWise]: elementwise inputs must have same dimensions or follow the broadcasting rules
1344: Add -> []
python: onnx/utils.h:209: nvinfer1::DimsHW nvonnxparser::get_DimsHW_from_CHW(nvinfer1::Dims): Assertion `dims.nbDims == 3' failed.
Aborted (core dumped)

I have no idea what's going on. Could you offer me some advice on that? Thanks again.

TensorRT 4.0.1 is a very old release; I would recommend you use the latest TRT release.
I used TRT 7.0 and it seems to work fine:

trtexec # trtexec --onnx=test.onnx --verbose --explicitBatch
[06/09/2020-05:53:46] [I] Average on 10 runs - GPU latency: 3.33862 ms - Host latency: 3.53347 ms (end to end 6.41365 ms)
[06/09/2020-05:53:46] [I] Host latency
[06/09/2020-05:53:46] [I] min: 3.40869 ms (end to end 5.43036 ms)
[06/09/2020-05:53:46] [I] max: 3.7039 ms (end to end 6.72685 ms)
[06/09/2020-05:53:46] [I] mean: 3.52738 ms (end to end 6.4208 ms)
[06/09/2020-05:53:46] [I] median: 3.52332 ms (end to end 6.41663 ms)
[06/09/2020-05:53:46] [I] percentile: 3.6631 ms at 99% (end to end 6.63821 ms at 99%)
[06/09/2020-05:53:46] [I] throughput: 0 qps
[06/09/2020-05:53:46] [I] walltime: 3.00767 s
[06/09/2020-05:53:46] [I] GPU Compute
[06/09/2020-05:53:46] [I] min: 3.23071 ms
[06/09/2020-05:53:46] [I] max: 3.4856 ms
[06/09/2020-05:53:46] [I] mean: 3.33097 ms
[06/09/2020-05:53:46] [I] median: 3.32483 ms
[06/09/2020-05:53:46] [I] percentile: 3.46214 ms at 99%
[06/09/2020-05:53:46] [I] total compute time: 3.00121 s
&&&& PASSED TensorRT.trtexec # trtexec --onnx=test.onnx --verbose --explicitBatch
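If you want to do the same from the Python API instead of trtexec, the TRT 7 build step looks roughly like the sketch below (a minimal sketch assuming an explicit-batch ONNX model; the file name test.onnx and the workspace size are just placeholders):

    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
    EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

    def build_engine(onnx_path="test.onnx"):
        # Create builder, explicit-batch network, and ONNX parser
        with trt.Builder(TRT_LOGGER) as builder, \
                builder.create_network(EXPLICIT_BATCH) as network, \
                trt.OnnxParser(network, TRT_LOGGER) as parser:
            builder.max_workspace_size = 1 << 28  # 256 MiB, adjust as needed
            with open(onnx_path, "rb") as f:
                if not parser.parse(f.read()):
                    # Report parser errors if the model fails to import
                    for i in range(parser.num_errors):
                        print(parser.get_error(i))
                    return None
            return builder.build_cuda_engine(network)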

Thanks


Highly appreciated! I will give it a try. Thanks a lot.

Hello @SunilJB,

Sorry … I am back. I tried TensorRT 7 and followed the instructions again, and I get the error below:

Traceback (most recent call last):
  File "tensorrt_infer.py", line 69, in <module>
    h_input, h_output, d_input, d_output, stream = alloc_buf(engine)
  File "tensorrt_infer.py", line 38, in alloc_buf
    h_input = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(0)),type=np.float32)
Boost.Python.ArgumentError: Python argument types in
    pycuda._driver.pagelocked_empty(int)
did not match C++ signature:
    pagelocked_empty(pycudaboost::python::api::object shape, pycudaboost::python::api::object dtype, pycudaboost::python::api::object order='C', unsigned int mem_flags=0)

Could you help me solve this problem? Also, how did you get the results you showed? Could you share the code here? Thanks again.

If you want to reproduce the error, my code can be downloaded from:

Environment

TensorRT Version : 7.0.0.11
GPU Type : GT710
Nvidia Driver Version : 440.82
CUDA Version : 10.2.89
CUDNN Version : 7.6.5
Operating System + Version : Ubuntu 16.04
Python Version (if applicable) : 3.7
TensorFlow Version (if applicable) : NONE
PyTorch Version (if applicable) : 1.5.0
Baremetal or Container (if container which image + tag) :

Hey mates,

Fortunately, I solved the above problem by simply changing

h_input = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(0)),type=np.float32)

to

h_input = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(0)),dtype=np.float32)

That was my fault for being careless; sorry for disturbing you guys. However, I still have a further problem, shown below:

TensorRT Version: 7.0.0.11
[TensorRT] WARNING: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
[TensorRT] ERROR: Parameter check failed at: engine.cpp::enqueue::298, condition: bindings != nullptr
[ 0.01988916 0.0054799 0.03696475 … 0.04337549 0.08516738
-0.02761208]
cost time: 0.00749969482421875
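For reference, the allocation and inference pattern I am trying to follow is roughly the sketch below (my own simplification of tensorrt_infer.py; the single input/output binding layout and float32 dtype are assumptions from my model):

    import numpy as np
    import pycuda.driver as cuda
    import pycuda.autoinit  # creates a CUDA context
    import tensorrt as trt

    def alloc_buf(engine):
        # Page-locked host buffers sized from the engine bindings
        h_input = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(0)),
                                        dtype=np.float32)
        h_output = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(1)),
                                         dtype=np.float32)
        # Matching device buffers
        d_input = cuda.mem_alloc(h_input.nbytes)
        d_output = cuda.mem_alloc(h_output.nbytes)
        stream = cuda.Stream()
        return h_input, h_output, d_input, d_output, stream

    def infer(context, h_input, h_output, d_input, d_output, stream):
        cuda.memcpy_htod_async(d_input, h_input, stream)
        # Bindings must be the device pointers, in binding order
        context.execute_async_v2(bindings=[int(d_input), int(d_output)],
                                 stream_handle=stream.handle)
        cuda.memcpy_dtoh_async(h_output, d_output, stream)
        stream.synchronize()
        return h_output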

I am new to using TensorRT, so there may be many more problems coming up for me. I hope you can help me solve them. Thanks.

Attachment for code: