cuDNN mapping error (CUDNN_STATUS_MAPPING_ERROR) in C++

Hi,

I have tested this on TensorRT 8.0.1 and 8.2.1 on both the Jetson Nano Dev Kit and the Jetson Nano module. When running my example C++ application, I encounter the following error:

ERROR: 1: [hardwareContext.cpp::configure::92] Error Code 1: Cudnn (CUDNN_STATUS_MAPPING_ERROR)

I used trtexec to convert my ONNX model to a TensorRT engine with the following command:

trtexec --onnx=face_landmarks_detector_1x3x256x256.onnx --fp16 --saveEngine=face.engine

My C++ application is compiled with:

g++ -std=c++17 -I/usr/local/cuda-10.2/targets/aarch64-linux/include \
    -I/usr/include/aarch64-linux-gnu -I/usr/lib/aarch64-linux-gnu/tegra/ \
    InferenceModel.cpp main.cpp -o facenet \
    -L/usr/local/cuda-10.2/targets/aarch64-linux/lib/ \
    -lnvinfer -lnvonnxparser -lcudart -lnvinfer_plugin \
    `pkg-config --cflags --libs opencv4`

Interestingly, the same model and commands work without any errors when I run my C++ code on my RTX 3090. Additionally, running the following on the Jetson Nano:

trtexec --loadEngine=face.engine
does not produce any errors either. So the problem seems to be either with the ONNX model on the Jetson Nano, or with my C++ code as compiled for the Jetson Nano?
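
For context, the inference path in InferenceModel.cpp follows the usual deserialize-and-enqueue pattern. Below is a minimal sketch of that path (simplified for this post; the binding indices, output size, and preprocessing are placeholders rather than my exact code):

#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iostream>
#include <vector>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    // Read the serialized engine produced by trtexec.
    std::ifstream file("face.engine", std::ios::binary | std::ios::ate);
    const size_t size = file.tellg();
    file.seekg(0);
    std::vector<char> blob(size);
    file.read(blob.data(), size);

    auto runtime = nvinfer1::createInferRuntime(gLogger);
    auto engine  = runtime->deserializeCudaEngine(blob.data(), size);
    auto context = engine->createExecutionContext();

    // Allocate device buffers for each binding.
    // Assumption: binding 0 is the 1x3x256x256 input, binding 1 is the output;
    // the output size below is a placeholder.
    std::vector<void*> bindings(engine->getNbBindings(), nullptr);
    const size_t kInputBytes  = 1 * 3 * 256 * 256 * sizeof(float);
    const size_t kOutputBytes = 1404 * sizeof(float);  // placeholder size
    cudaMalloc(&bindings[0], kInputBytes);
    cudaMalloc(&bindings[1], kOutputBytes);

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Preprocessed image data would go here.
    std::vector<float> input(1 * 3 * 256 * 256, 0.f);
    cudaMemcpyAsync(bindings[0], input.data(), kInputBytes,
                    cudaMemcpyHostToDevice, stream);

    // This enqueue is roughly where the CUDNN_STATUS_MAPPING_ERROR shows up.
    context->enqueueV2(bindings.data(), stream, nullptr);

    std::vector<float> output(kOutputBytes / sizeof(float));
    cudaMemcpyAsync(output.data(), bindings[1], kOutputBytes,
                    cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);
    return 0;
}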

I have attached a tar archive containing all the required files:

face_landmarks_detector_1x3x256x256.onnx
InferenceModel.cpp
InferenceModel.h
main.cpp
nano_face.engine

Any insights on what might be causing this issue would be greatly appreciated!

Thanks!

trt_error.tar.gz (6.0 MB)

Solved
See the attached code changes.
trt_fix.tar.gz (6.0 MB)
