import onnx
import onnx_tensorrt.backend as backend
import numpy as np

# Load the ONNX model and build a TensorRT engine for it on GPU 0.
model = onnx.load("./New/Data/model.onnx")
engine = backend.prepare(model, device='CUDA:0')

# Run a random 1x3x112x112 float32 input through the engine.
input_data = np.random.random(size=(1, 3, 112, 112)).astype(np.float32)
output_data = engine.run(input_data)[0]
print(output_data)
print(output_data.shape)
Error:
ModuleNotFoundError                       Traceback (most recent call last)
in
      1 import onnx
----> 2 import onnx_tensorrt.backend as backend
      3 import numpy as np
      4
      5 model = onnx.load("/home/souvik/Documents/TNPL/New/Convert_Mxnet_to_Onnx/model.onnx")
ModuleNotFoundError: No module named 'onnx_tensorrt'
Hi,
Did you find out what the error is?
You need to install onnx_tensorrt:
https://github.com/onnx/onnx-tensorrt
Regards, Markus
Hi,
I did that part, but I'm still getting that error. Do I need to set something in .bashrc?
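One quick thing to check (just a sketch, nothing Jetson-specific) is whether the interpreter I'm actually running can see the package at all, since pip can install for a different Python or user:

# Check whether the running interpreter can see the onnx_tensorrt package.
import sys
import importlib.util
print(sys.executable)                             # which python3 is being used
print(importlib.util.find_spec("onnx_tensorrt"))  # None means it is not on sys.path
print([p for p in sys.path if "packages" in p])   # site-/dist-packages dirs searched

If find_spec returns None, the module was installed for a different interpreter or user, which .bashrc alone won't fix.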
Hi,
What should I give in cmake .. -DTENSORRT_ROOT= ?
ritesh@mach-1:~$ sudo find / -name tensorrt 2> /dev/null
/usr/share/doc/tensorrt
/usr/lib/python3.6/dist-packages/tensorrt
/usr/src/tensorrt
/home/ritesh/.local/lib/python3.6/site-packages/tensorflow/python/compiler/tensorrt
/home/ritesh/.local/lib/python3.6/site-packages/tensorflow/contrib/tensorrt
ritesh@mach-1:~$ ls -ld /usr/src/tensorrt/
drwxr-xr-x 5 root root 4096 Aug 24 19:37 /usr/src/tensorrt/
ritesh@mach-1:~$ ls /usr/src/tensorrt/
bin data samples
These are the results I'm getting. Can you tell me what you used for the -DTENSORRT_ROOT path?
Hi,
Below is what I gathered from various sources; it got onnx-tensorrt working on a Jetson Nano.
I'm using the SD card image from
https://courses.nvidia.com/courses/course-v1:DLI+C-RX-02+V1/about
Not sure you need all of this.
pip3 install --user -r requirements_onnx.txt
requirements_onnx.txt:
numpy
onnx==1.4.1
pycuda==2019.1
wget>=3.2
Pillow>=5.2.0
bistiming
eyewitness==1.0.8
scipy==1.2.1
celery==4.3.0
gevent
line-bot-sdk
mkdir ~/git
cd ~/git
git clone --recursive https://github.com/onnx/onnx-tensorrt.git
export CUDACXX="/usr/local/cuda-10.0/bin/nvcc"
cd onnx-tensorrt
git checkout v5.0 # to get the right API for TensorRT 5.0.6
mkdir build; cd build
cmake .. -DTENSORRT_ROOT=/usr/src/tensorrt -DGPU_ARCHS="53" -DCUDA_INCLUDE_DIRS=/usr/local/cuda-10.0/include
make -j8
sudo make install
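If you're unsure which TensorRT release your image ships (and therefore which onnx-tensorrt tag to check out), the Python bindings that the find command located under /usr/lib/python3.6/dist-packages report it; a minimal check, assuming they import:

# Print the installed TensorRT version; it should be a 5.0.x release
# to match the v5.0 branch checked out above.
import tensorrt
print(tensorrt.__version__)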
In onnx-tensorrt/setup.py, change EXTRA_COMPILE_ARGS to:
EXTRA_COMPILE_ARGS = [
'-std=c++11',
'-DUNIX',
'-D__UNIX',
'-fPIC',
'-O3',
'-ffast-math',
'-flto',
'-march=armv8-a+crypto',
'-mcpu=cortex-a57+crypto',
'-w',
'-fmessage-length=0',
'-fno-strict-aliasing',
'-D_FORTIFY_SOURCE=2',
'-fstack-protector',
'--param=ssp-buffer-size=4',
'-Wformat',
'-Werror=format-security',
'-DNDEBUG',
'-fwrapv',
'-Wall',
'-DSWIG',
]
and then run:
dlinano@jetson-nano:~/git/onnx-tensorrt$ pip3 install --user .
You may also need to install the Python modules listed here:
https://github.com/onnx/onnx-tensorrt#python-modules
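Once everything is installed, a minimal end-to-end sanity check (a sketch that builds a tiny ReLU-only graph in memory, so no model file is needed) is:

import numpy as np
import onnx
from onnx import helper, TensorProto
import onnx_tensorrt.backend as backend

# Single-node graph: Y = Relu(X), with a fixed 1x3x112x112 float input.
node = helper.make_node("Relu", inputs=["X"], outputs=["Y"])
graph = helper.make_graph(
    [node], "sanity_check",
    [helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 3, 112, 112])],
    [helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 3, 112, 112])],
)
model = helper.make_model(graph)

engine = backend.prepare(model, device='CUDA:0')
x = np.random.random(size=(1, 3, 112, 112)).astype(np.float32)
print(engine.run(x)[0].shape)   # expect (1, 3, 112, 112)

If that runs without a ModuleNotFoundError, the script at the top of the thread should work too.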
Regards, Markus