I’ve built a program with TensorRT and it gives me a runtime error during inference on a Jetson Nano:
cuda/cudaElementWiseLayer.cpp (560) - Cuda Error in execute: 8 (invalid device function)
I compile for the CUDA GPU archs 37, 53, 60, 61, 62, and 72.
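For reference, that arch list corresponds to nvcc `-gencode` flags roughly like this (a sketch, not my exact build invocation, since my build goes through a larger build system; note the Nano itself is compute capability 5.3, which is in the list):

```shell
# Sketch of the per-arch flags implied by "37 53 60 61 62 72".
# Each -gencode pair embeds SASS for that SM; compute_53/sm_53 is the Jetson Nano.
nvcc -gencode arch=compute_37,code=sm_37 \
     -gencode arch=compute_53,code=sm_53 \
     -gencode arch=compute_60,code=sm_60 \
     -gencode arch=compute_61,code=sm_61 \
     -gencode arch=compute_62,code=sm_62 \
     -gencode arch=compute_72,code=sm_72 \
     -o app app.cu
```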
- Do you have an idea why this fails and how to fix it?
- Is there a utility (or env vars, a special mode, anything) that would give me more information about the problem?
The standard examples work fine.
OpenCV is a custom build based on version 4.1.1.