Hi, bobzeng
I just enabled the cuDNN option as shown in comment #8.
You can follow the build steps in #8 with TensorRT enabled to generate a better package.
root@2409b48c8d37:/incubator-mxnet# make
Makefile:178: "USE_LAPACK disabled because libraries were not found"
INFO: nvcc was not found on your path
INFO: Using /usr/local/cuda/bin/nvcc as nvcc path
Running CUDA_ARCH: -gencode arch=compute_53,code=sm_53
g++ -std=c++11 -c -DMSHADOW_FORCE_STREAM -Wall -Wsign-compare -O3 -DNDEBUG=1 -I/incubator-mxnet/3rdparty/mshadow/ -I/incubator-mxnet/3rdparty/dmlc-core/include -fPIC -I/incubator-mxnet/3rdparty/tvm/nnvm/include -I/incubator-mxnet/3rdparty/dlpack/include -I/incubator-mxnet/3rdparty/tvm/include -Iinclude -funroll-loops -Wno-unused-parameter -Wno-unknown-pragmas -Wno-unused-local-typedefs -DMSHADOW_USE_SSE=0 -DMSHADOW_USE_F16C=0 -I/usr/local/cuda/include -DMSHADOW_USE_CBLAS=1 -DMSHADOW_USE_MKL=0 -DMSHADOW_RABIT_PS=0 -DMSHADOW_DIST_PS=0 -DMSHADOW_USE_PASCAL=0 -DMXNET_USE_OPENCV=1 -I/usr/include/opencv -fopenmp -DMXNET_USE_OPERATOR_TUNING=1 -DMSHADOW_USE_CUDNN=1 -I/incubator-mxnet/3rdparty/cub -DMXNET_ENABLE_CUDA_RTC=1 -DMXNET_USE_NCCL=0 -DMXNET_USE_LIBJPEG_TURBO=0 -MMD -c src/operator/nn/mkldnn/mkldnn_act.cc -o build/src/operator/nn/mkldnn/mkldnn_act.o
In file included from /incubator-mxnet/3rdparty/mshadow/mshadow/tensor.h:16:0,
from include/mxnet/./base.h:32,
from include/mxnet/operator.h:38,
from src/operator/nn/mkldnn/mkldnn_act.cc:28:
/incubator-mxnet/3rdparty/mshadow/mshadow/./base.h:179:12: fatal error: cudnn.h: No such file or directory
#include <cudnn.h>
^~~~~~~~~
compilation terminated.
Makefile:461: recipe for target 'build/src/operator/nn/mkldnn/mkldnn_act.o' failed
make: *** [build/src/operator/nn/mkldnn/mkldnn_act.o] Error 1
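For reference, this failure means the compiler's include path doesn't contain the cuDNN header. A minimal sketch for locating it and re-running make (the directories below are assumptions for a JetPack/aarch64 setup; adjust them to whatever `find` prints):

```shell
# Look for the cuDNN header in the usual places (paths are guesses)
find /usr/include /usr/local/cuda -name cudnn.h 2>/dev/null

# Expose the directory that find printed to the compiler and linker,
# then rebuild with cuDNN enabled
export CPATH=/usr/include/aarch64-linux-gnu:$CPATH
export LIBRARY_PATH=/usr/lib/aarch64-linux-gnu:$LIBRARY_PATH
make USE_CUDA=1 USE_CUDNN=1 USE_CUDA_PATH=/usr/local/cuda -j$(nproc)
```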
I get a segfault on import as well after installing the wheel in Python 3.6. It is a fairly fresh install. I’ve followed the documentation on Jetson Zoo - eLinux.org to install PyTorch, but that’s all, other than making sure the system was up to date using the package manager. If I import mxnet while using pdb I get the following trace:
> /home/lycaass/mximport.py(1)<module>()
-> import mxnet
(Pdb) continue
Traceback (most recent call last):
  File "/usr/lib/python3.6/pdb.py", line 1667, in main
    pdb._runscript(mainpyfile)
  File "/usr/lib/python3.6/pdb.py", line 1548, in _runscript
    self.run(statement)
  File "/usr/lib/python3.6/bdb.py", line 434, in run
    exec(cmd, globals, locals)
  File "<string>", line 1, in <module>
  File "/home/lycaass/mximport.py", line 1, in <module>
    import mxnet
  File "/usr/local/lib/python3.6/dist-packages/mxnet/__init__.py", line 24, in <module>
    from .context import Context, current_context, cpu, gpu, cpu_pinned
  File "/usr/local/lib/python3.6/dist-packages/mxnet/context.py", line 24, in <module>
    from .base import classproperty, with_metaclass, _MXClassPropertyMetaClass
  File "/usr/local/lib/python3.6/dist-packages/mxnet/base.py", line 213, in <module>
    _LIB = _load_lib()
  File "/usr/local/lib/python3.6/dist-packages/mxnet/base.py", line 204, in _load_lib
    lib = ctypes.CDLL(lib_path[0], ctypes.RTLD_LOCAL)
  File "/usr/lib/python3.6/ctypes/__init__.py", line 348, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libopencv_imgcodecs.so.3.3: cannot open shared object file: No such file or directory
Uncaught exception. Entering post mortem debugging
Running 'cont' or 'step' will restart the program
> /usr/lib/python3.6/ctypes/__init__.py(348)__init__()
-> self._handle = _dlopen(self._name, mode)
Update: I am running JetPack 4.3, which includes OpenCV 4.1.1; it appears the wheel requires OpenCV 3.3.
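One way to confirm the mismatch (a sketch; the wheel's install path is assumed from the traceback above):

```shell
# List the OpenCV runtime libraries the system actually provides
ldconfig -p | grep libopencv_imgcodecs

# List the OpenCV sonames the wheel's native library was linked against
# (the dist-packages path is a guess based on the traceback)
ldd /usr/local/lib/python3.6/dist-packages/mxnet/libmxnet.so | grep opencv
```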
Update: Using the instructions from post #8, I built v1.5.x instead of v1.4.x, and everything seems to work.
Thank you for the pre-made script, it’s very helpful! I’ve gotten this to work with some tweaks on a fresh JetPack 4.4 install on the TX2. I can’t quite get it to work on the Nano (fresh SD card image), however. I’ve tried adding up to 12 GB of swap and setting swappiness to 90, but it always runs out of memory on the very last step, 'Building TensorRT Engine'; the OOM killer kills it before it can complete.
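In case it helps, this is roughly how I'd add swap and reduce memory pressure on the Nano before a big build (a sketch, not verified on every JetPack release; the sizes are judgment calls):

```shell
# Create and enable an 8 GB swap file on the SD card
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Prefer swapping over invoking the OOM killer
sudo sysctl vm.swappiness=90

# Optional: boot to a text console so the desktop doesn't eat RAM
sudo systemctl set-default multi-user.target

free -h   # confirm the extra swap shows up
```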
FWIW, I used this script but had to add export LD_LIBRARY_PATH=/usr/local/lib/python3.6/dist-packages/mxnet:$LD_LIBRARY_PATH to the end of ~/.profile to be able to import mxnet. I have a TX2 with a fresh install of Jetpack 4.4. I hope this info helps someone.
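To check whether the export actually fixed the lookup, something like this should work (a sketch; the library path is assumed from the wheel layout):

```shell
# Make the wheel's bundled libraries visible to the dynamic linker
export LD_LIBRARY_PATH=/usr/local/lib/python3.6/dist-packages/mxnet:$LD_LIBRARY_PATH

# Every dependency of libmxnet.so should now resolve; any line containing
# "not found" means a library is still missing
ldd /usr/local/lib/python3.6/dist-packages/mxnet/libmxnet.so | grep 'not found' \
  || echo "all shared-library dependencies resolved"
```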
Hi there!
I just got a Jetson Nano (flashed using the jetson-nano-jp451-sd-card-image file), and installed MXNet 1.7.0 by using the ‘autoinstall_mxnet.sh’ script.
I was getting the error "Illegal instruction (core dumped)" when importing MXNet, but I fixed it by adding
export OPENBLAS_CORETYPE=ARMV8
to my .bashrc file, as mentioned in this discussion.
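For anyone else hitting this: as I understand it, the crash comes from OpenBLAS mis-detecting the CPU on recent JetPack releases, and pinning the core type works around it. A quick way to test the fix before editing .bashrc (sketch):

```shell
# Try the override for a single run first
OPENBLAS_CORETYPE=ARMV8 python3 -c 'import mxnet; print(mxnet.__version__)'

# If the import succeeds, make it permanent
echo 'export OPENBLAS_CORETYPE=ARMV8' >> ~/.bashrc
```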
However, when running pip list, I see an mxnet 1.7.0 entry. Shouldn’t it be mxnet-cu102, or something similar?
Then, I needed to install GluonCV. I cloned the repo and tried the installation script:
git clone https://github.com/dmlc/gluon-cv
cd gluon-cv && python setup.py install --user
I got a dependency error during the matplotlib installation:
Searching for matplotlib
Reading https://pypi.org/simple/matplotlib/
Downloading
Best match: matplotlib 3.4.2
Processing matplotlib-3.4.2.tar.gz
Writing /tmp/easy_install-g5zib6k8/matplotlib-3.4.2/setup.cfg
Running matplotlib-3.4.2/setup.py -q bdist_egg --dist-dir /tmp/easy_install-g5zib6k8/matplotlib-3.4.2/egg-dist-tmp-g0hdz3xh
error: Setup script exited with
Beginning with Matplotlib 3.4, Python 3.7 or above is required.
You are using Python 3.6.9.
but GluonCV seemed to install normally, and I installed matplotlib later.
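If you want setup.py to stop tripping over matplotlib on Python 3.6, pre-installing a compatible version should avoid the error (a sketch; 3.3.x being the last series that supports 3.6 follows from the message above):

```shell
# Pre-install the newest matplotlib that still supports Python 3.6
pip3 install --user 'matplotlib<3.4'

# Then re-run the GluonCV install; setup.py should now see the
# dependency as satisfied instead of trying to build 3.4.2
cd gluon-cv && python3 setup.py install --user
```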
The problem is that now I’m trying to run some example object detection code, and I’m getting the error:
MXNet Error: Build with USE_OPENCV=1 for image io
Didn’t autoinstall_mxnet.sh enable that? Can I fix it, or should I re-flash the card and reinstall everything following the manual procedure in post #8?
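Before re-flashing, it may be worth asking the installed wheel directly which compile-time features it has; MXNet 1.6 and later expose them through mxnet.runtime (a quick sketch):

```shell
# Print whether the installed libmxnet was compiled with OpenCV support
python3 -c "import mxnet.runtime as rt; print('OPENCV:', rt.Features().is_enabled('OPENCV'))"
```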