Jetson Nano - Running the OpenPose example gives a CUDA check failed error

I used this guide to install OpenPose on my Jetson Nano:

after that, I ran this command from the openpose folder:

bash ./scripts/ubuntu/

I can see my webcam turn on, and a small window labelled OpenPose 1.5.0 opens. However, after a few moments, I see the following log:

angelo@angelo-desktop:~/openpose$ ./build/examples/openpose/openpose.bin -camera_resolution 640x480 -net_resolution 128x96
Starting OpenPose demo...
Configuring OpenPose...
Starting thread(s)...
Auto-detecting camera index... Detected and opened camera 0.
Auto-detecting all available GPUs... Detected 1 GPU(s), using 1 of them starting at GPU 0.

Error occurred on a thread. OpenPose closed all its threads and then propagated the error to the main thread. Error description:

Cuda check failed (48 vs. 0): no kernel image is available for execution on the device

Coming from:
- src/openpose/net/netCaffe.cpp:reshapeNetCaffe():112
- src/openpose/gpu/cuda.cpp:cudaCheck():42
- src/openpose/net/netCaffe.cpp:reshapeNetCaffe():117
- src/openpose/net/netCaffe.cpp:forwardPass():256
- src/openpose/pose/poseExtractorCaffe.cpp:forwardPass():626
- src/openpose/pose/poseExtractor.cpp:forwardPass():53
- ./include/openpose/pose/wPoseExtractor.hpp:work():107
- ./include/openpose/thread/worker.hpp:checkAndWork():93
- [All threads closed and control returned to main thread]
- src/openpose/utilities/errorAndLog.cpp:checkWorkerErrors():280
- ./include/openpose/thread/threadManager.hpp:stop():243
- ./include/openpose/thread/threadManager.hpp:exec():202
- ./include/openpose/wrapper/wrapper.hpp:exec():424
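For context: "no kernel image is available for execution on the device" generally means the binaries were compiled without a cubin or PTX entry matching the GPU's architecture (the Nano's is sm_53). One way to confirm is to run `cuobjdump --list-elf` on the built `libcaffe.so` and look for sm_53. A small sketch of that check; the sample listing below is illustrative, not captured from a real Nano:

```python
# Check whether a built library embeds code for the Nano's sm_53 GPU.
# On a real system you would capture the listing with something like:
#   cuobjdump --list-elf 3rdparty/caffe/distribute/lib/libcaffe.so
import re

# Illustrative cuobjdump output (a TX2-only build, so only sm_62 cubins):
sample_cuobjdump_output = """\
ELF file    1: libcaffe.1.sm_62.cubin
ELF file    2: libcaffe.2.sm_62.cubin
"""

def embedded_archs(listing: str) -> set:
    """Extract the sm_XX architecture numbers named in a cuobjdump listing."""
    return set(re.findall(r"sm_(\d+)", listing))

archs = embedded_archs(sample_cuobjdump_output)
print("embedded architectures:", sorted(archs))
print("runs on Jetson Nano (sm_53):", "53" in archs)
```

If "53" is missing from the listing, the library has to be rebuilt with the Nano's architecture flags.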

I’m not sure how to proceed. I know my camera works, I’ve tested it via Cheese, and the imagenet-camera and detectnet-camera examples (I had to set the default camera to 0 instead of -1).

Does anyone know how to go about this issue? Any help is very much appreciated.

Could this be related to the CUDA setup in Caffe?

The Jetson TX2 guide has you run a script that installs Caffe and OpenPose. My guess is that the CUDA_ARCH variable was never changed to be compatible with the Nano. The correct CUDA_ARCH for the Jetson Nano is:

CUDA_ARCH := -gencode arch=compute_53,code=sm_53 \
             -gencode arch=compute_53,code=compute_53

And I think this is quite different from the TX2's.

Right now, I'm not sure how to edit the Makefile.config that the OpenPose script uses. I've also had trouble installing Caffe on the Nano as a standalone installation, and the OpenPose-for-TX2 script was a godsend, but it doesn't seem to provide any flexibility for configuring Makefile.config.

Any leads are very much appreciated.


There’s a Makefile.config.Ubuntu16_cuda9_JetsonTX2_JetPack33 file in the openpose/scripts/ubuntu folder. I’ll see what I can do here.


Yes, the Nano's GPU compute capability is 5.3.
Please let us know how it goes.


Hello, I tried to update the Makefile.config.Ubuntu16_cuda9_JetsonTX2_JetPack33, and here’s how it looks right now:

## Refer to
# Contributions simplifying and improving our build system are welcome!

# CPU-only switch (comment to build without GPU support).

# uncomment to disable IO dependencies and corresponding data layers
# USE_LMDB := 0

# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
#	You should not set this flag if you will be reading LMDBs with any
#	possibility of simultaneous read and write

# Uncomment if you're using OpenCV 3

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++

# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 through *_61 lines for compatibility.
# For CUDA < 8.0, comment the *_60 and *_61 lines for compatibility.
#CUDA_ARCH := -gencode arch=compute_30,code=sm_30 \
#		-gencode arch=compute_35,code=sm_35 \
#		-gencode arch=compute_50,code=sm_50 \
#		-gencode arch=compute_52,code=sm_52 \
#		-gencode arch=compute_60,code=sm_60 \
#		-gencode arch=compute_61,code=sm_61 \
#		-gencode arch=compute_61,code=compute_61
# Deprecated
# CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
# 		-gencode arch=compute_20,code=sm_21 \
# 		-gencode arch=compute_30,code=sm_30 \
# 		-gencode arch=compute_35,code=sm_35 \
# 		-gencode arch=compute_50,code=sm_50 \
# 		-gencode arch=compute_52,code=sm_52 \
# 		-gencode arch=compute_60,code=sm_60 \
# 		-gencode arch=compute_61,code=sm_61 \
# 		-gencode arch=compute_61,code=compute_61

# For Jetson Nano
CUDA_ARCH :=   -gencode arch=compute_53,code=sm_53 \
		-gencode arch=compute_53,code=compute_53

# Uncomment to enable op::Profiler

# DEEP_NET choice:
# caffe for Caffe (default and only option so far)
DEEP_NET := caffe

# Caffe directory
CAFFE_DIR := 3rdparty/caffe/distribute

# Faster GUI display
# OpenPose 3-D Reconstruction
# Eigen directory (Ceres)
EIGEN_DIR := /usr/include/eigen3/
# Spinnaker directory
SPINNAKER_DIR := /usr/include/spinnaker

# Whatever else you find you need goes here.
INCLUDE_DIRS := /usr/local/include /usr/include/hdf5/serial
LIBRARY_DIRS := /usr/local/lib /usr/lib /usr/lib/aarch64-linux-gnu /usr/lib/aarch64-linux-gnu/hdf5/serial

# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)

BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment for debugging. Does not work on OSX due to
# DEBUG := 1

# The ID of the GPU that 'make runtest' will use to run unit tests.

# enable pretty build (comment to see full commands)
Q ?= @

I've added the GPU compute capability 53 in lines 51-53. I've also checked the other lines, and they seem to be correct.

I ran: bash ./

After that, I tried to run the OpenPose bin example again, but it still gives the same CUDA error.

Now, I have two options.

  1. Double check everything and make sure that the Makefile.config is correct.
  2. Figure out how to install Caffe on Jetson Nano. This is an option because there's a separate script I can just run after installing Caffe.

Does anyone know how to do one of two, or both of those things? Any leads are appreciated.

I have the same problem after installing Caffe on my Jetson Nano.
When I run the test (or anything else), it shows 'Check failed error: no kernel image is available for…'
PS: I followed the tutorial on this page: , and it failed.

Hi, angelo_v

I got the same error on my Nano when compiling with './scripts/ubuntu/'. When I compiled with './scripts/ubuntu/install_openpose' instead, the error no longer appeared.

I guess that openpose_JetsonTX2_JetPack3.3 is not suitable for JetPack 4.2.

Hello, Walter_LIU

Where did you find the 'install_openpose' script? I am also struggling to get OpenPose running on the Jetson Nano.

  1. vi Makefile.config # see attachment
  2. cmake -DCUDA_ARCH_BIN="53" -DCUDA_ARCH_PTX="53" -DUSE_CUDNN=1 .
  3. make -j4

OPENOPSE,cmake,make.txt (60.6 KB)

Makefile.config for openopse.txt (3.42 KB)

How did you install caffe?


Here is an example of building Caffe on the Jetson platform.

The only difference is that the Nano's GPU architecture is sm_53 (compute capability 5.3).
Please update the CUDA_ARCH to this:

CUDA_ARCH := -gencode arch=compute_53,code=sm_53 \
             -gencode arch=compute_53,code=compute_53
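The mapping from a compute capability like 5.3 to those two flags is mechanical, so a small helper (hypothetical, just to illustrate the pattern) can generate the CUDA_ARCH value for any Jetson:

```python
def cuda_arch_flags(capability: str) -> str:
    """Build Caffe-style CUDA_ARCH gencode flags for one compute capability.

    The first -gencode entry embeds native machine code (sm_XX); the second
    embeds PTX (compute_XX) so newer GPUs can JIT-compile the kernels.
    """
    cc = capability.replace(".", "")  # "5.3" -> "53"
    return (f"-gencode arch=compute_{cc},code=sm_{cc} "
            f"-gencode arch=compute_{cc},code=compute_{cc}")

print(cuda_arch_flags("5.3"))  # Jetson Nano
print(cuda_arch_flags("6.2"))  # Jetson TX2
```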