Jetson Inference DetectNet Problems

I am following this section of the jetson-inference Github (I already raised an issue there but got no reply):

Sadly I am unable to create the new DetectNet model with DIGITS because soon after I click on “Create”, I get an error:

ERROR: error code -11

Please note I am using Caffe 0.16 as 0.15 would not build. Here is my caffe_output.log:

caffe_output.log (328.9 KB)

Please can someone help me get this working? All the previous sections, including classification with DIGITS, worked fine. :-(

Here is a screenshot with more information on my setup:

Hi,

It looks like you are hitting a similar error to this issue:

Would you mind giving the suggestion in that comment a try first?
Rebuild Caffe with this option enabled in your Makefile.config:

WITH_PYTHON_LAYER := 1
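A minimal rebuild sketch (assuming NVIDIA Caffe is checked out at ~/caffe; adjust the path for your setup):

```shell
# Makefile.config must already contain: WITH_PYTHON_LAYER := 1
cd ~/caffe
make clean
make -j"$(nproc)" all      # rebuild the Caffe libraries
make -j"$(nproc)" pycaffe  # rebuild the Python bindings as well
```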

Thanks.

Hi, yes I already have that in my Makefile.config.

Just to be sure, I did a “make clean” and rebuilt Caffe, but the same problem persists. I am using Caffe 0.16 by the way (0.15 did not build correctly), so I hope this is not an issue. As I said, I had no problem doing classification in the previous jetson-inference examples.

When I start DIGITS server I get this message so I hope this is not a cause of the problem:
“Couldn’t import dot_parser, loading of dot files will not be possible.”

Hi,

Sorry for the late update.
Instead of just running the “make” command, would you mind resetting the Python bindings and trying again?

Here is a similar issue for your reference:

Thanks.

I read your link and reinstalled the protobuf package using pip. I then tried to recreate the DetectNet model, but it failed in the same way as before.

My protobuf is version 3.12.2, so it is newer than the 3.5 release that other people have reported problems with.
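As a sanity check on that comparison: dotted version strings compare misleadingly as plain strings ("3.12.2" < "3.5" lexically), so a numeric comparison is needed. A small sketch, where version_tuple is a hypothetical helper and not part of protobuf or DIGITS:

```python
def version_tuple(v: str) -> tuple:
    """Turn a dotted version string like '3.12.2' into a tuple of ints.

    Non-numeric fields (e.g. the 'post1' in '3.1.0.post1') are ignored,
    which is enough for this simple check.
    """
    parts = []
    for field in v.split("."):
        if not field.isdigit():
            break
        parts.append(int(field))
    return tuple(parts)

# 3.12.2 really is newer than 3.5, even though the raw strings say otherwise
assert version_tuple("3.12.2") > version_tuple("3.5")
assert "3.12.2" < "3.5"  # naive string comparison gets it backwards
```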

What exactly do you mean by “reset the python binding”?

Is the reinstall of protobuf what you mean?

I think you have prematurely marked this as solved. It is definitely not solved from my perspective.

Hi,

Sorry for the late update.

After reinstalling protobuf, please also recompile the Caffe Python library so it picks up the update.
Could you give it a try and let us know how it goes?

Thanks.

I just did a “make clean” on Caffe and rebuilt pycaffe etc.

Still get the same error.

The only reason I can see for this error is that I am using Caffe 0.16 instead of the recommended 0.15; however, 0.15 won’t even build, so I have no choice.

At the bottom of this message is my Caffe Makefile.config, in the hope that it is of some help.

Ultimately, detecting where objects are in an image is not as important as classifying them in my scenario, so it is not the end of the world if we can’t get this working.

## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!

# cuDNN acceleration switch (uncomment to build with cuDNN).
# cuDNN version 6 or higher is required.
USE_CUDNN := 1

# NCCL acceleration switch (uncomment to build with NCCL)
# See https://github.com/NVIDIA/nccl
# USE_NCCL := 1

# Builds tests with 16 bit float support in addition to 32 and 64 bit.
# TEST_FP16 := 1

# uncomment to disable IO dependencies and corresponding data layers
# USE_OPENCV := 0
# USE_LEVELDB := 0
# USE_LMDB := 0

# Uncomment if you're using OpenCV 3
OPENCV_VERSION := 3

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++

# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.
CUDA_ARCH := 	-gencode arch=compute_50,code=sm_50 \
		-gencode arch=compute_52,code=sm_52 \
		-gencode arch=compute_60,code=sm_60 \
		-gencode arch=compute_61,code=sm_61 \
		-gencode arch=compute_61,code=compute_61

# BLAS choice:
# atlas for ATLAS
# mkl for MKL
# open for OpenBlas - default, see https://github.com/xianyi/OpenBLAS
BLAS := open
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
BLAS_INCLUDE := /opt/OpenBLAS/include/
BLAS_LIB := /opt/OpenBLAS/lib/

# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib

# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
PYTHON_INCLUDE := /usr/include/python2.7 \
		/usr/lib/python2.7/dist-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
# ANACONDA_HOME := $(HOME)/anaconda
# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
		# $(ANACONDA_HOME)/include/python2.7 \
		# $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include \

# Uncomment to use Python 3 (default is Python 2)
# PYTHON_LIBRARIES := boost_python3 python3.5m
# PYTHON_INCLUDE := /usr/include/python3.5m \
#                 /usr/lib/python3.5/dist-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib
# PYTHON_LIB := $(ANACONDA_HOME)/lib

# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib

# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu/hdf5/serial

# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1

BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1

# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0

# enable pretty build (comment to see full commands)
Q ?= @

# shared object suffix name to differentiate branches
LIBRARY_NAME_SUFFIX := -nv

Hi,

We are going to try to reproduce this issue in our environment.
Would you mind sharing your host setup with us? Is it Ubuntu 18.04?

Thanks.

Thanks for investigating this.

I am using Debian Buster (i.e. stable). Caffe is built from source as mentioned, version 0.16.

I am using Nvidia driver 440.82 with a GTX-1070. Here are some versions of software I am using:

  • CUDA 10.2
  • tensorflow-gpu 1.14.0
  • protobuf 3.12.2
  • CuDNN packages: libcudnn7_7.6.5.32-1+cuda10.2_amd64.deb libcudnn7-dev_7.6.5.32-1+cuda10.2_amd64.deb libcudnn7-doc_7.6.5.32-1+cuda10.2_amd64.deb

Hopefully that is everything you need to know, but ask me if I missed anything.

Hi,

Thanks for your feedback.
We are still investigating this issue and will share more information with you later.

Thanks.

Okay, thanks.

Hi,

Here is a status update for you.

We can reproduce this issue on a standard Ubuntu 18.04 desktop and have passed it to our internal team for suggestions.

Thanks.

Great to hear that. Thanks for investigating.

Hi,

Sorry to keep you waiting.
This issue comes from Caffe itself rather than DIGITS.
Please downgrade your protobuf library to v3.1.0.
The training job works correctly after applying this change in our environment.

$ sudo -H pip install --upgrade protobuf==3.1.0.post1
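After the downgrade, it may be worth confirming which protobuf the interpreter actually picks up, and then rebuilding the Caffe Python bindings so the generated modules match. A sketch, assuming Caffe lives at ~/caffe:

```shell
# confirm the version the python interpreter now sees (should print 3.1.0)
python -c "import google.protobuf; print(google.protobuf.__version__)"

# rebuild pycaffe against the downgraded protobuf
cd ~/caffe && make clean && make -j"$(nproc)" all pycaffe
```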

Thanks.

Hi, I will be unable to test this suggestion for the next few weeks but I will respond once I am able to. Thanks for looking into this.