./rtSafe/safeContext.cpp (133) - Cudnn Error in configure: 7 (CUDNN_STATUS_MAPPING_ERROR)

Hi,

I have converted my PyTorch model to ONNX and then to a TensorRT engine (with custom plugins).
I am also using another TRT model (model 2) whose output is fed as input to the above TRT model (model 1). I am serving the two TRT models with Triton Server on a Jetson Nano. When I send a request from my laptop to the Jetson Nano, the response fails and the Jetson Nano terminal shows the following error:

E0311 13:59:57.029723 29688 logging.cc:43] …/rtSafe/safeContext.cpp (133) - Cudnn Error in configure: 7 (CUDNN_STATUS_MAPPING_ERROR)
E0311 13:59:57.030261 29688 logging.cc:43] FAILED_EXECUTION: std::exception

I am not sure what is going wrong or what is causing the issue here.
Can you please assist me in resolving this error?

Command I used on the Jetson Nano to start serving the two models:

LD_PRELOAD="Einsum_op.so RoI_Align.so libyolo_layer.so" ./Downloads/bin/tritonserver --model-repository=./models --min-supported-compute-capability=5.3 --log-verbose=1
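
In case it is relevant, my model repository follows Triton's standard layout. A rough sketch (the model directory names and version numbers here are placeholders rather than my exact setup):

    models/
    ├── yolov3_spp/
    │   ├── config.pbtxt
    │   └── 1/
    │       └── model.plan
    └── slowfast/
        ├── config.pbtxt
        └── 1/
            └── model.plan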

Model Info:

  1. YOLOv3-SPP (Ultralytics).
  2. SlowFast (with two custom plugins).

The bounding boxes output by the first model (YOLOv3) go as input to the second model (SlowFast).
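
For context, this is roughly how I chain the two models from my laptop using the tritonclient Python package (a simplified sketch; the model names, tensor names, shapes, and server address below are placeholders, not my exact configuration):

    import numpy as np
    import tritonclient.http as httpclient

    # Connect to Triton's HTTP endpoint on the Jetson Nano (address is a placeholder).
    client = httpclient.InferenceServerClient(url="jetson-nano:8000")

    # Model 1: YOLOv3 detection on a dummy frame (tensor names and shape are placeholders).
    frame = np.random.rand(1, 3, 608, 608).astype(np.float32)
    det_in = httpclient.InferInput("images", list(frame.shape), "FP32")
    det_in.set_data_from_numpy(frame)
    det_res = client.infer("yolov3_spp", inputs=[det_in],
                           outputs=[httpclient.InferRequestedOutput("boxes")])
    boxes = det_res.as_numpy("boxes").astype(np.float32)

    # Model 2: SlowFast, fed the clip plus the boxes from model 1
    # (again, tensor names and shapes are placeholders).
    clip_in = httpclient.InferInput("clip", list(frame.shape), "FP32")
    clip_in.set_data_from_numpy(frame)
    roi_in = httpclient.InferInput("rois", list(boxes.shape), "FP32")
    roi_in.set_data_from_numpy(boxes)
    act_res = client.infer("slowfast", inputs=[clip_in, roi_in],
                           outputs=[httpclient.InferRequestedOutput("actions")])
    print(act_res.as_numpy("actions"))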

Looking forward to your reply.

Thanks,
Darshan

Hi, the UFF and Caffe parsers have been deprecated from TensorRT 7 onwards, so we request that you try the ONNX parser instead.
Please check the link below for more details.

Thanks!

Hi @NVES,

I converted the model using the trtexec utility. Does it use the ONNX parser for the ONNX-to-TRT model conversion?

Thanks

Hi @darshancganji12,

trtexec is a tool to quickly utilize TensorRT without having to develop your own application. It's useful for generating serialized engines from models and for benchmarking networks on random data. When you pass an ONNX model via --onnx, it builds the engine through the ONNX parser, so yes, your conversion used the ONNX parser. You can find more info here:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
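
For example, a typical invocation to build and save an engine from an ONNX model looks like the following (the file names are placeholders; --plugins loads custom plugin libraries, such as the ones you preload for Triton):

    trtexec --onnx=model.onnx --saveEngine=model.plan --plugins=Einsum_op.so --plugins=RoI_Align.so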

Thank you.