I was unable to compile and install MXNet 1.5 with TensorRT on the Jetson Nano. Has anyone managed to compile it? Please help me. Thank you.

Hey Lucas,

Thanks, that solved the problem for me as well.

I ran the mxnet_numpy_performance_test on my Jetson Nano and got some interesting results:

NumPy : Dotted two 512x512 matrices in 0.05 s.
mxnet.numpy : Dotted two 512x512 matrices in 0.08 s.
mxnet.numpy on GPU : Dotted two 512x512 matrices in 0.01 s.

NumPy : Dotted two 1024x1024 matrices in 0.38 s.
mxnet.numpy : Dotted two 1024x1024 matrices in 0.65 s.
mxnet.numpy on GPU : Dotted two 1024x1024 matrices in 0.03 s.

NumPy : Dotted two 2048x2048 matrices in 2.96 s.
mxnet.numpy : Dotted two 2048x2048 matrices in 5.22 s.
mxnet.numpy on GPU : Dotted two 2048x2048 matrices in 0.10 s.

NumPy : Dotted two 4096x4096 matrices in 23.62 s.
mxnet.numpy : Dotted two 4096x4096 matrices in 41.78 s.
mxnet.numpy on GPU : Dotted two 4096x4096 matrices in 0.65 s.

Looking at jtop’s CPU utilization meters, it looks like mxnet.numpy uses only one CPU core while NumPy uses all four. Am I missing something here, or is there room for optimizing the compile/build parameters?
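Before rebuilding, it may be worth checking the threading environment variables MXNet honours; if the wheel’s BLAS was compiled single-threaded these won’t help, but they are cheap to try. A minimal sketch (the value 4 is just an assumption matching the Nano’s four cores):

```shell
# Ask MXNet's CPU engine and the underlying BLAS/OpenMP runtime to use
# all four cores; only affects processes started after the exports.
export OMP_NUM_THREADS=4
export OPENBLAS_NUM_THREADS=4        # only relevant for OpenBLAS builds
export MXNET_CPU_WORKER_NTHREADS=4   # MXNet's own operator thread pool
```

Re-running the benchmark after exporting these would show whether the single-core behaviour comes from runtime settings or from how the wheel was compiled.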

Hi @AastaLLL , thanks for providing customers with an MXNet package for Jetson. I am an MXNet community member and was looking forward to fixing the instructions on the MXNet page for Jetson support: https://mxnet.apache.org/get_started/jetson_setup
Could you please share the steps you used to build your MXNet package for Jetson? Thanks!

Hi, my first post.

When I try to build MXNet from source with the flag USE_TENSORRT=1, the build fails. Could you try building it with TensorRT and tell us how, or share another build? It would help a lot.

Will we see an mxnet wheel for JetPack 4.4 – that is, one compiled to use CUDA 10.2?



We are going to build an MXNet package with TensorRT support for JetPack 4.4.
Will share with you once it’s done.



I’m trying to build the MXNet package for JetPack 4.4, but I’m missing a deb file:

I’m just shocked at how hard (impossible) it is to install mxnet for python3 on the nvidia jetson nano.
I’ve spent all week trying to do this and am super frustrated. Is it really this hard?
I’ve stopped trying to compile now and am installing from the wheel using: sudo pip3 install mxnet-1.6.0-py3-none-any.whl
However, import cv fails with:
OSError: libcudart.so.10.0: cannot open shared object file: No such file or directory.
nvcc --version shows I have CUDA release 10.2.
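A quick way to see the mismatch (the wheel expects the CUDA 10.0 runtime, the system has 10.2) is to ask the dynamic loader what it can actually find; the paths below assume the default JetPack layout:

```shell
# List every CUDA runtime library the loader knows about.
ldconfig -p | grep libcudart || echo "no libcudart in the loader cache"
# Show what the installed toolkit actually provides.
ls /usr/local/cuda/lib64/libcudart.so* 2>/dev/null || true
```

If the only match is libcudart.so.10.2, the 10.0 wheel cannot load; a wheel built against the installed CUDA version is needed rather than a symlink across major versions.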

So it seems I either:

  1. Find a wheel for mxnet for cuda10.2 - but I can’t find one
  2. Compile myself - but I’ve failed so many times. Where is a decent walk through for python3?
  3. Downgrade to cuda10.0 - how do I do that?

Is it really this hard? I don’t want to speak badly of the Jetson Nano, but I’ve heard of others switching to Intel boards to get past issues exactly like this. I really don’t want to have to switch.

Any ideas are greatly appreciated. I can’t handle the constant failures much longer.

Hi, I met the same problem with OSError: libcudart.so.10.0, and I used this method to solve it:
Copy libmxnet.so from /usr/local/mxnet/ to /usr/local/lib/python3.6/dist-packages/mxnet/.
You can try this method.
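The tip above as a command; the paths are from that poster’s setup and may differ on your system, and sudo is likely needed to write into /usr/local/lib:

```shell
sudo cp /usr/local/mxnet/libmxnet.so /usr/local/lib/python3.6/dist-packages/mxnet/
```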

Hi @Easit_Mickly,
Thanks for the tip. Unfortunately the error hasn’t changed.
Did you need to do anything else?
When you run nvcc --version do you see release 10.2?

Hmm, it was a long time ago and I forget the details. You could also try making a link to libmxnet.so in /usr/local/mxnet/. @djenny
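The link variant of the same idea, sketched with the same assumed paths (a symlink keeps the copy in dist-packages in sync with whatever is installed in /usr/local/mxnet):

```shell
sudo ln -sf /usr/local/mxnet/libmxnet.so /usr/local/lib/python3.6/dist-packages/mxnet/libmxnet.so
```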


We have a prebuilt MXNet package for Jetson now.

Install the prebuilt MXNet package directly:

$ wget https://raw.githubusercontent.com/AastaNV/JEP/master/MXNET/autoinstall_mxnet.sh
$ sudo chmod +x autoinstall_mxnet.sh
$ ./autoinstall_mxnet.sh <Nano/TX1/TX2/Xavier>

If you want to build it from source, please check this script:

$ wget https://raw.githubusercontent.com/AastaNV/JEP/master/MXNET/autobuild_mxnet.sh
$ sudo chmod +x autobuild_mxnet.sh
$ ./autobuild_mxnet.sh <Nano/TX1/TX2/Xavier>



Is this an official package? If yes, maybe it is worth an announcement.

These scripts work. However, when I try to use resnet50 instead of resnet18 in the TensorRT example, I get this error message:

[libprotobuf ERROR google/protobuf/io/coded_stream.cc:207] A protocol message was rejected because it was too big (more than 67108864 bytes). To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.

Any suggestions about the protobuf size limitation issue?

thank you

Which Jetson platform did you test on? It works for me on Jetson Xavier with the resnet50_v1 model.

I am using a Jetson Xavier NX. The problem was solved by building and installing a newer protobuf (>3.2).
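For anyone hitting the same error, a sketch of building a newer protobuf from source; the version (3.8.0 here) and the default /usr/local prefix are assumptions – the poster only reports that a release newer than 3.2 resolved it:

```shell
# Build and install protobuf 3.8.0 from source (version is an example).
wget https://github.com/protocolbuffers/protobuf/releases/download/v3.8.0/protobuf-cpp-3.8.0.tar.gz
tar xzf protobuf-cpp-3.8.0.tar.gz
cd protobuf-3.8.0
./configure          # installs to /usr/local by default
make -j"$(nproc)"
sudo make install
sudo ldconfig        # refresh the shared-library cache
```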

I would suggest someone check these scripts on a clean SD-card JetPack 4.4 installation. I had heaps of problems; unfortunately I did not document everything. Some things I remember:

  1. “sudo make install” does not work, as the exports are not recognised. It should be split into “make && sudo make install”.
  2. There is a spelling mistake in the include path in line 112 of the build script: “inlude” should be “include”.
  3. I had some problems with LD_LIBRARY_PATH, LIBRARY_PATH and PATH not including the CUDA directories. I guess this is a problem only on a clean install, but perhaps you should check it.
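For item 3, the usual environment setup on a fresh JetPack install looks like this (paths assume the default CUDA location; typically added to ~/.bashrc so they survive a new shell):

```shell
# Put the CUDA compiler and runtime libraries on the search paths.
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
```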


When following these instructions (line by line instead of simply running the script), I got stuck at line #70:

cmake  -DCMAKE_CXX_FLAGS=-I/usr/local/cuda/targets/aarch64-linux/include -DONNX_NAMESPACE=onnx2trt_onnx -DGPU_ARCHS="$gpu_arch" .. && \

where an error occurs:

CMake Error at CMakeLists.txt:21 (cmake_minimum_required):
CMake 3.13 or higher is required. You are running version 3.10.2
-- Configuring incomplete, errors occurred!

I tried to install cmake (and its dependencies – the arm64 deb versions) manually but failed: apt refuses to install them because their dependencies (themselves!) do not meet the version requirement.
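One way around the old distro cmake that avoids the apt dependency knot is the cmake package on PyPI, which ships prebuilt binaries; whether a wheel exists for your Python version on aarch64 is worth checking first, since otherwise pip will attempt a slow source build:

```shell
# Install a recent cmake into ~/.local and prefer it over /usr/bin/cmake.
pip3 install --user --upgrade cmake
export PATH="$HOME/.local/bin:$PATH"
cmake --version   # should now report 3.13 or newer
```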
Can you support me with this?

My system & setup:
HW: Jetson Nano B01
JetPack: 4.4.1

Hi nhp12345,

Please open a new topic for your issue. Thanks.


Hi kayccc,

Thanks for your support! I’ve solved it already.