I installed SSD Caffe following JK Jung's blog post here:
I installed everything correctly and the cmake and make builds all pass, but when I run "make -j4 runtest" to run the tests included with the Caffe build, it shuts down my TX2 for some strange reason: a hard shutdown. The main reason I am rebuilding and installing Caffe is that I see the same shutdown from OpenPose, which led me to believe that Caffe was not built with the correct CUDA arch for the TX2 (compute capability 6.2), so I am trying to rebuild Caffe to fix this, but I am getting the same result. I assume the device shuts down because it hits some sort of out-of-memory fault, as I have seen this in the past. I currently have 5.0 GB of storage space left, but I think the fault refers to RAM or GPU memory (which the TX2 shares).
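If it is an out-of-memory kill, it should be visible just before the shutdown. Here is a minimal sketch I have been using to check headroom before launching the test suite (the 2048 MB threshold is an arbitrary value I picked for this sketch, not anything from the Caffe docs):

```shell
# Read MemAvailable from /proc/meminfo and convert to MB
# (on the TX2, CPU and GPU share the same physical RAM).
avail_mb=$(awk '/MemAvailable/ {print int($2/1024)}' /proc/meminfo)
echo "Available memory: ${avail_mb} MB"

# 2048 MB is an arbitrary threshold chosen for this sketch.
if [ "${avail_mb}" -lt 2048 ]; then
    echo "Low memory: try 'make runtest -j1' or add a swap file" >&2
fi
```

Running tegrastats in a second terminal while "make runtest" executes should also show whether RAM peaks right before the shutdown.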
My only other option may be to build master Caffe directly from the upstream project, but I don't think there is anything wrong with my installation, as the main builds succeed. Here are the results from building all, pycaffe, and test:
nvidia@nvidia-desktop:~/datacapture/ssd-caffe/build$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 28G 22G 5.0G 81% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 243M 3.7G 7% /dev/shm
tmpfs 3.9G 37M 3.9G 1% /run
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/loop0 20M 20M 0 100% /snap/snapd/4610
/dev/loop1 48M 48M 0 100% /snap/core18/1195
/dev/loop2 48M 48M 0 100% /snap/core18/1149
/dev/loop3 99M 99M 0 100% /snap/guvcview/88
/dev/loop4 43M 43M 0 100% /snap/gtk-common-themes/1313
/dev/loop5 136M 136M 0 100% /snap/gnome-3-28-1804/73
/dev/nvme0n1 939G 20G 872G 3% /home/nvidia/datacapture
tmpfs 787M 128K 786M 1% /run/user/1000
nvidia@nvidia-desktop:~/datacapture/ssd-caffe/build$ make -j4 all
[ 1%] Built target proto
[ 80%] Built target caffe
[ 82%] Built target caffe.bin
[ 82%] Built target create_label_map
[ 83%] Built target test_net
[ 84%] Built target convert_imageset
[ 86%] Built target upgrade_net_proto_binary
[ 86%] Built target extract_features
[ 87%] Built target device_query
[ 87%] Built target finetune_net
[ 87%] Built target convert_annoset
[ 89%] Built target upgrade_solver_proto_text
[ 89%] Built target get_image_size
[ 90%] Built target net_speed_benchmark
[ 91%] Built target compute_image_mean
[ 94%] Built target train_net
[ 94%] Built target upgrade_net_proto_text
[ 95%] Built target classification
[ 97%] Built target ssd_detect
[ 97%] Built target convert_mnist_data
[ 98%] Built target convert_mnist_siamese_data
[100%] Built target convert_cifar_data
[100%] Built target pycaffe
nvidia@nvidia-desktop:~/datacapture/ssd-caffe/build$ make -j4 test
nvidia@nvidia-desktop:~/datacapture/ssd-caffe/build$ make -j4 pycaffe
[ 1%] Built target proto
[100%] Built target caffe
[100%] Built target pycaffe
Here is my Makefile.config:
## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!
# cuDNN acceleration switch (uncomment to build with cuDNN).
USE_CUDNN := 1
# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1
# uncomment to disable IO dependencies and corresponding data layers
# USE_OPENCV := 0
# USE_LEVELDB := 0
# USE_LMDB := 0
# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
# You should not set this flag if you will be reading LMDBs with any
# possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1
# Uncomment if you're using OpenCV 3
OPENCV_VERSION := 3
# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++
# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr
# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 through *_61 lines for compatibility.
# For CUDA < 8.0, comment the *_60 and *_61 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_53,code=sm_53 \
-gencode arch=compute_62,code=sm_62 \
-gencode arch=compute_72,code=sm_72
# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := atlas
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
# BLAS_INCLUDE := /path/to/your/blas
# BLAS_LIB := /path/to/your/blas
# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib
# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app
# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
# PYTHON_INCLUDE := /usr/include/python2.7 \
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
# ANACONDA_HOME := $(HOME)/anaconda
# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
# $(ANACONDA_HOME)/include/python2.7 \
# Uncomment to use Python 3 (default is Python 2)
PYTHON_LIBRARIES := boost_python-py36 python3.6m
PYTHON_INCLUDE := /usr/include/python3.6m
# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib
# PYTHON_LIB := $(ANACONDA_HOME)/lib
# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib
# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1
# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/aarch64-linux-gnu /usr/lib/aarch64-linux-gnu/hdf5/serial
# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib
# NCCL acceleration switch (uncomment to build with NCCL)
# https://github.com/NVIDIA/nccl (last tested version: v1.2.3-1+cuda8.0)
# USE_NCCL := 1
# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1
# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute
# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1
# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0
# enable pretty build (comment to see full commands)
Q ?= @
#COMMON_FLAGS += -O3 -ffast-math -flto -march=armv8-a+crypto -mcpu=cortex-a57+crypto
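For reference, here is how I understand the three -gencode lines in that config map to Jetson modules (values from NVIDIA's published compute-capability specs; the helper function name jetson_cc is my own):

```shell
# Map a Jetson module name to its CUDA compute capability:
# TX1/Nano = 5.3, TX2 = 6.2, Xavier = 7.2 (per NVIDIA's specs).
jetson_cc() {
    case "$1" in
        tx1|nano)  echo "53" ;;
        tx2)       echo "62" ;;
        xavier)    echo "72" ;;
        *)         echo "unknown" ;;
    esac
}

jetson_cc tx2    # the TX2 needs arch=compute_62,code=sm_62
```

So the CUDA_ARCH block above should already cover the TX2. If cuobjdump is available, something like "cuobjdump --list-elf build/lib/libcaffe.so | grep sm_62" would confirm the compiled library actually contains sm_62 code.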
Any assistance would be greatly appreciated, as I have exhausted my ideas about why this could be happening. Solving this will most likely solve the OpenPose issue as well, so it would be a two-fold solution.
I am not able to run jetson_clocks either; I get an error stating "cannot access fan". Does anyone else get this? I suspect this is why the CPUs and the TX2 as a whole are not running at maximum capacity. Is there a bug in 4.2 with jetson_clocks? Can I do something else to get the clocks to run at max?
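In case it helps anyone debug the same message, here is a sketch I used to check whether the fan node jetson_clocks tries to touch is actually present (the sysfs path is my assumption for L4T on the TX2, and jetson_clocks also needs to be run with sudo):

```shell
# jetson_clocks reads/writes fan state via sysfs; if the node is missing,
# or the script is run without root, it reports it cannot access the fan.
fan_node=/sys/devices/pwm-fan/target_pwm   # path is an assumption for L4T 4.2
if [ -e "${fan_node}" ]; then
    echo "fan node present: ${fan_node}"
else
    echo "fan node missing: this alone could explain the 'cannot access fan' error"
fi
```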