TensorRT backend for ONNX on Jetson Nano

Hi experts,

I have installed the TensorRT backend for ONNX on my Jetson Nano, but I can't pass onnx_backend_test.py.

ONNX backend tests can be run as follows:

hgnan@jetson-nano:~/nas/onnx-tensorrt$ python onnx_backend_test.py
s(Unnamed Layer* 0) [Unary]
(4, 5)
.sssssssssss(Unnamed Layer* 0) [ElementWise]
(4, 5)
.sssssssssssssssssssssssssssssssssssssssssssssssssssssssss(Unnamed Layer* 0) [Shuffle]
(Unnamed Layer* 1) [Pooling]
(Unnamed Layer* 2) [Shuffle]
(3, 31)
...............

.Start downloading model vgg19 from https://s3.amazonaws.com/download.onnx/models/opset_9/vgg19.tar.gz
Done
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:604] Reading dangerously large protocol message.  If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:81] The total number of bytes read was 574674712
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:604] Reading dangerously large protocol message.  If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:81] The total number of bytes read was 574674712

(Unnamed Layer* 35) [Activation]
(Unnamed Layer* 36) [Pooling]
(Unnamed Layer* 37) [Shuffle]
(Unnamed Layer* 38) [Shuffle]
(Unnamed Layer* 39) [Fully Connected]
(Unnamed Layer* 40) [Shuffle]
(Unnamed Layer* 41) [Activation]
(Unnamed Layer* 42) [Shuffle]
(Unnamed Layer* 43) [Shuffle]
(Unnamed Layer* 44) [Fully Connected]
(Unnamed Layer* 45) [Shuffle]
(Unnamed Layer* 46) [Activation]
(Unnamed Layer* 47) [Shuffle]
(Unnamed Layer* 48) [Shuffle]
(Unnamed Layer* 49) [Fully Connected]
(Unnamed Layer* 50) [Shuffle]
(Unnamed Layer* 51) [Shuffle]
(Unnamed Layer* 52) [Softmax]
(Unnamed Layer* 53) [Shuffle]
(1000,)
Killed

Do you have an idea how to fix this? :)

Hi Walter, can you try running sudo tegrastats in the background during this test, and keeping an eye on the memory usage? Perhaps this test program is consuming all the memory available. The “killed” message is typically an indicator that the system is out of memory.

If that’s the case, you can try running your Nano headless (without display attached) to save memory, or mount a swap file.
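
For reference, a minimal sketch of setting up a swap file (the 4 GB size and the /mnt/swapfile path are only examples; adjust to your setup):

sudo fallocate -l 4G /mnt/swapfile
sudo chmod 600 /mnt/swapfile
sudo mkswap /mnt/swapfile
sudo swapon /mnt/swapfile
free -m   # confirm the new swap shows up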

Hi, Dusty_nv

Thanks for your support.
I tried running "sudo python onnx_backend_test.py" both headless and with swap enabled, but the test failed at the same stage. I have also stopped unnecessary system services to save memory.
The jtop output is as follows:

jtop - Raffaello Bonghi
NVIDIA Jetson NANO/TX1 - Jetpack 4.2 [L4T 32.1.0]
CPU1 [|||| schedutil - 18%] 204MHz
CPU2 [|||| schedutil - 16%] 204MHz
CPU3 [|| schedutil - 9%] 204MHz
CPU4 [|||| schedutil - 16%] 204MHz

Mem  [|3.9GB/4.0GB] (lfb 2x512MB)
EMC  [|||                                                                 5%] 204MHz
Imm  [                                                      0.0GB/0.3GB] (lfb 252MB)
Swp  [|                                                  2.9GB/8.2GB] (cached 120MB)

GPU  [                                                                     0%] 76MHz
Dsk  [#                                                               27.1GB/58.4GB]
 APE: 25MHz                   [Sensor]   [Temp]          [Power]     [Cur/Avr]
 Board info:                 AO         44.00C          POM_5V_CPU 301 mW/555 mW
   Name: NANO/TX1            PMIC       100.00C         POM_5V_IN  1852mW/3074mW
   JP: 4.2 [L4T 32.1.0]      thermal    38.00C          POM_5V_GPU  0  mW/754 mW
 NV Power: MAXN - 0          GPU        37.50C
                             PLL        35.50C
                             CPU        38.00C

I can successfully load mobilenetv2-1.0.onnx and resnet18v1.onnx, but loading yolov3.onnx fails.

hgnan@jetson-nano:~/nas/onnx-tensorrt$ cat  trt-backend.py
import onnx
import onnx_tensorrt.backend as backend
import numpy as np

model = onnx.load("yolov3.onnx")
engine = backend.prepare(model, device='CUDA:0')
input_data = np.random.random(size=(32, 3, 224, 224)).astype(np.float32)
output_data = engine.run(input_data)[0]
print(output_data)
print(output_data.shape)
hgnan@jetson-nano:~/nas/onnx-tensorrt$ python3 trt-backend.py
Traceback (most recent call last):
  File "trt-backend.py", line 6, in <module>
    engine = backend.prepare(model, device='CUDA:0')
  File "/home/hgnan/nas/onnx-tensorrt/onnx_tensorrt/backend.py", line 217, in prepare
    super(TensorRTBackend, cls).prepare(model, device, **kwargs)
  File "/home/hgnan/.local/lib/python3.6/site-packages/onnx/backend/base.py", line 74, in prepare
    onnx.checker.check_model(model)
  File "/home/hgnan/.local/lib/python3.6/site-packages/onnx/checker.py", line 86, in check_model
    C.check_model(model.SerializeToString())
onnx.onnx_cpp2py_export.checker.ValidationError: Nodes in a graph must be topologically sorted, however input 'y3:01' of node:
input: "y3:01" output: "TFNodes/yolo_evaluation_layer_1/Shape_3:0" name: "TFNodes/yolo_evaluation_layer_1/Shape_3" op_type: "Shape"
 is not output of any previous nodes.
hgnan@jetson-nano:~/nas/onnx-tensorrt$
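
If the node that produces 'y3:01' does exist but simply appears later in the file, the graph can be re-sorted before the checker runs. Below is a minimal sketch of such a re-sort; this is generic ONNX surgery, not part of onnx-tensorrt, and the output filename is just an example. If nothing produces 'y3:01' at all, the exporter itself emitted a broken graph and the model needs to be re-exported.

import copy
import onnx

model = onnx.load("yolov3.onnx")
graph = model.graph

# Names available before any node runs: graph inputs and initializers.
available = {i.name for i in graph.input}
available.update(init.name for init in graph.initializer)

remaining = list(graph.node)
ordered = []
while remaining:
    # A node is ready once all of its (non-optional) inputs are available.
    ready = [n for n in remaining
             if all(i in available or i == "" for i in n.input)]
    if not ready:
        # Some input (like 'y3:01' above) is produced by no node at all;
        # re-sorting cannot fix that -- the exporter emitted a broken graph.
        raise RuntimeError("unresolvable inputs in nodes: %s"
                           % [n.name for n in remaining])
    for n in ready:
        ordered.append(n)
        available.update(n.output)
        remaining.remove(n)

sorted_nodes = [copy.deepcopy(n) for n in ordered]  # detach before clearing
del graph.node[:]
graph.node.extend(sorted_nodes)
onnx.checker.check_model(model)  # should pass if order was the only issue
onnx.save(model, "yolov3-sorted.onnx")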

Hi,

We have a TensorRT sample for YOLOv3 in .onnx format.

Would you mind checking it first?
/usr/src/tensorrt/samples/python/yolov3_onnx

Thanks.

The conversion of YOLOv3-608 to ONNX does not work: the Python script yolov3_to_onnx.py fails with the following errors.
It would be great if you could fix this, because I would like to convert the ONNX model to TensorRT.

Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
graph YOLOv3-608 (
%000_net[FLOAT, 64x3x608x608]
) initializers (

.
.
.
Building Something …
.
.
.
_conv_weights, %106_convolutional_conv_bias)
return %082_convolutional, %094_convolutional, %106_convolutional
}
Traceback (most recent call last):
  File "yolov3_to_onnx.py", line 812, in <module>
    main()
  File "yolov3_to_onnx.py", line 805, in main
    onnx.checker.check_model(yolov3_model_def)
  File "/home/sh/.local/lib/python2.7/site-packages/onnx/checker.py", line 86, in check_model
    C.check_model(model.SerializeToString())
onnx.onnx_cpp2py_export.checker.ValidationError: Op registered for Upsample is depracted in domain_version of 10

==> Context: Bad node spec: input: "085_convolutional_lrelu" input: "086_upsample_scale" output: "086_upsample" name: "086_upsample" op_type: "Upsample" attribute { name: "mode" s: "nearest" type: STRING }

Hi,

This sample requires onnx==1.4.1.
The latest ONNX (1.5.0) deprecates the Upsample layer, which causes this error.

Could you try to update your environment to onnx v1.4.1 and try it again?

pip3 uninstall onnx
pip3 install onnx==1.4.1 --user
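
Afterwards, you can confirm which version Python picks up with:

python3 -c "import onnx; print(onnx.__version__)"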

Thanks.

Hi,

Unfortunately, the script shows the same error (Bad node spec …) even after installing ONNX version 1.4.1:

pip3 show onnx
Name: onnx
Version: 1.4.1
Summary: Open Neural Network Exchange
Home-page: https://github.com/onnx/onnx
Author: bddppq
Author-email: jbai@fb.com
License: UNKNOWN
Location: /home/sh/.local/lib/python3.6/site-packages
Requires: numpy, typing, typing-extensions, six, protobuf
Required-by:

PS:
Thanks for your great work!

Hi

I tried to do:

pip3 uninstall onnx
pip3 install onnx==1.4.1 --user

But got the error (below). Any hints?

Building wheels for collected packages: onnx
WARNING: Building wheel for onnx failed: [Errno 13] Permission denied: '/home/soren/.cache/pip/wheels/c5'
Failed to build onnx
Installing collected packages: onnx
Running setup.py install for onnx ... error
ERROR: Complete output from command /usr/bin/python3 -u -c 'import setuptools, tokenize;__file__='"'"'/tmp/pip-install-ynga2nui/onnx/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-wevgeg_d/install-record.txt --single-version-externally-managed --compile --user --prefix=:
ERROR: fatal: not a git repository (or any of the parent directories): .git
running install
running build
running build_py
running create_version
running cmake_build
-- Build type not set - defaulting to Release
-- The C compiler identification is GNU 7.4.0
-- The CXX compiler identification is GNU 7.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Error at CMakeLists.txt:217 (message):
  Protobuf compiler not found
Call Stack (most recent call first):
  CMakeLists.txt:248 (relative_protobuf_generate_cpp)

-- Configuring incomplete, errors occurred!
See also "/tmp/pip-install-ynga2nui/onnx/.setuptools-cmake-build/CMakeFiles/CMakeOutput.log".
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/tmp/pip-install-ynga2nui/onnx/setup.py", line 328, in <module>
    'backend-test-tools = onnx.backend.test.cmd_tools:main',
  File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 129, in setup
    return distutils.core.setup(**attrs)
  File "/usr/lib/python3.6/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/usr/lib/python3.6/distutils/dist.py", line 955, in run_commands
    self.run_command(cmd)
  File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "/usr/lib/python3/dist-packages/setuptools/command/install.py", line 61, in run
    return orig.install.run(self)
  File "/usr/lib/python3.6/distutils/command/install.py", line 589, in run
    self.run_command('build')
  File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "/usr/lib/python3.6/distutils/command/build.py", line 135, in run
    self.run_command(cmd_name)
  File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "/tmp/pip-install-ynga2nui/onnx/setup.py", line 203, in run
    self.run_command('cmake_build')
  File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "/tmp/pip-install-ynga2nui/onnx/setup.py", line 190, in run
    subprocess.check_call(cmake_args)
  File "/usr/lib/python3.6/subprocess.py", line 291, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/cmake', '-DPYTHON_INCLUDE_DIR=/usr/include/python3.6m', '-DPYTHON_EXECUTABLE=/usr/bin/python3', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cpython-36m-aarch64-linux-gnu.so', '/tmp/pip-install-ynga2nui/onnx']' returned non-zero exit status 1.
----------------------------------------

ERROR: Command "/usr/bin/python3 -u -c 'import setuptools, tokenize;__file__='"'"'/tmp/pip-install-ynga2nui/onnx/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-wevgeg_d/install-record.txt --single-version-externally-managed --compile --user --prefix=" failed with error code 1 in /tmp/pip-install-ynga2nui/onnx/

sojohans

The error log says:
CMake Error at CMakeLists.txt:217 (message):
Protobuf compiler not found

Try installing the protobuf compiler:

sudo apt-get install protobuf-compiler
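
You can confirm it is found afterwards with:

protoc --version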

Hi sh2222

Thanks, but same error…

Sojohans

Please try:

pip3 install protobuf

Hi, sh2222

Let me check this and update you later.
Thanks.

Hi AastaLLL

I did try pip3 install protobuf.

But it gave the same error when I installed onnx on the Jetson Nano.

It is the TensorRT Python sample yolov3_onnx that I am trying to get to work.

Sojohan

Hi AastaLLL

Did you find a solution to this?

Thanks,

sojohan

The yolov3_onnx sample worked for me once I had installed onnx 1.4.1. Before that I was getting the same check_model error as sh2222. Note that the yolov3_to_onnx.py script is only compatible with Python 2, so onnx should not be installed using pip3. You can check the version of onnx used by Python 2 with e.g. python2 -m pip freeze | grep onnx
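
For reference, the matching Python 2 install would be something like:

python2 -m pip install --user onnx==1.4.1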

Hi SB_97

Thanks….Will try…

Sojohan

Hi,

We can execute yolov3_to_onnx.py without error.
Here are our steps for your reference:

  • Flash JetPack 4.2.
$ cp -r /usr/src/tensorrt/ .
$ cd tensorrt/samples/python/yolov3_onnx/
$ python2 -m pip install -r requirements.txt
$ python yolov3_to_onnx.py

Thanks.

Installing ONNX 1.4.1 for Python 2 solved the problem.

BUT!

Do you have an idea how to run the second step, python onnx_to_tensorrt.py, to create the TensorRT engine without running into a killed process due to memory issues?

I solved this by using a USB flash drive as swap:

  • plug in an empty USB stick (preferably USB 3.0)
  • sudo fdisk -l to identify the drive letter (it was /dev/sda1 on mine)
  • sudo mkswap /dev/sdx1
  • sudo swapon -p 32767 /dev/sdx1 (where 32767 is the highest priority)
  • cat /proc/swaps to check the new swap is listed
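
If you want this swap to survive a reboot, you can also add a matching line to /etc/fstab (adjust /dev/sdx1 to your actual device):

/dev/sdx1 none swap sw,pri=32767 0 0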

The -p flag may not do what you think. Priority only matters when there are multiple swap areas, and even then it usually makes sense to leave them all at the same priority: equal-priority swaps are used round-robin, which tends to improve swap performance as you add disks.

https://unix.stackexchange.com/questions/84453/what-is-the-purpose-of-multiple-swap-files

What you may be looking for is vm.swappiness, which tells the kernel how aggressively to swap.

Set to 10, the system will only start swapping when it is almost out of RAM, meaning it performs well and then hits a wall.

A value closer to 90 will swap nearly all the time, making performance worse overall, but the slowdown will also be more consistent.

Swappiness can be set temporarily with "sysctl vm.swappiness=10" (recommended range 10-90) and persistently by adding "vm.swappiness=10" at the end of /etc/sysctl.conf.
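
For example:

sudo sysctl vm.swappiness=10
echo "vm.swappiness=10" | sudo tee -a /etc/sysctl.conf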