CUDA runtime error 8 on Jetson Nano

I’ve built a program with TensorRT, and it gives me a runtime error during inference on the Jetson Nano:

cuda/cudaElementWiseLayer.cpp (560) - Cuda Error in execute: 8 (invalid device function)

I compile for the CUDA GPU archs: 37 53 60 61 62 72.
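For reference, this is roughly how such an arch list is usually passed to nvcc in a CMake build (a hypothetical fragment, not the poster's actual build files); sm_53 is the arch that covers the Jetson Nano's Maxwell GPU:

```cmake
# Hypothetical fragment: emit device code for the archs relevant here.
# sm_53 = Jetson Nano (Maxwell), sm_62 = TX2, sm_72 = Xavier.
set(CMAKE_CUDA_FLAGS "${CMAKE_CUDA_FLAGS} \
  -gencode arch=compute_53,code=sm_53 \
  -gencode arch=compute_62,code=sm_62 \
  -gencode arch=compute_72,code=sm_72")
```

If sm_53 were missing from the list, an `invalid device function` error on the Nano would be the expected symptom for any hand-written kernels.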

  1. Do you have an idea why this may not work and how to fix it?
  2. Is there a utility (or env vars, or some special mode, or anything else) that would give me more information about the problem?

Thank you!

Standard examples work fine.
TensorRT 5.1.6.1
Cuda 10.0.326-1
Cudnn 7.5.0.56
OpenCV is a custom build based on version 4.1.1

Hi,

Your environment may not be set up correctly; the error code indicates an invalid device function.

How do you launch TensorRT engine?
Please note that a compiled TensorRT engine cannot be used across platforms.
You will need to convert the model into a TensorRT engine on the Nano directly.

Thanks.

Thank you for the answer!

I build the network on the device with the IBuilder::buildCudaEngine call from an ONNX file. However, I haven’t converted the model specifically for the Jetson Nano; I use a model prepared for regular NVIDIA GPUs, so that may be the root of the problem.

Ok, I understand your answer now. I did not load a serialized engine built on another platform; I build a new engine from ONNX on the device. So the question still remains. Thanks!

Btw, is it possible that the problem is caused by a lack of memory? When I track RAM usage while launching my program, it stays around 3.2 GB, which is quite close to the 4 GB limit.
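To get a quick read on whether you are near the limit, you can check available memory right before launching the program (a generic Linux check; on the Nano, tegrastats gives a more detailed per-subsystem view):

```shell
# Print available system memory in MB just before starting the application
free -m | awk '/^Mem:/ {print "available MB:", $7}'
```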

Up. Is there a tool to debug such errors?

Hi,

Yes, you can monitor the system status with tegrastats:

sudo tegrastats

By the way, you can also check whether there is a memory issue (e.g. a leak or an invalid access) with cuda-memcheck:

sudo /usr/local/cuda-10.0/bin/cuda-memcheck ./[app]
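In addition, the standard CUDA_LAUNCH_BLOCKING environment variable can help localize this kind of error: it forces kernel launches to run synchronously, so the failing call itself returns the error rather than a later, unrelated one (./[app] stands in for your binary, as above):

```shell
# Force synchronous kernel launches so the error surfaces at the failing call
CUDA_LAUNCH_BLOCKING=1 ./[app]
```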

Thanks.