Error Code 1: Cudnn (CUDNN_STATUS_MAPPING_ERROR)

Description

When I run TensorRT inference, I get this error:
[02/24/2025-06:43:27] [TRT] [I] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 1454, GPU 3491 (MiB)
[02/24/2025-06:43:27] [TRT] [I] Loaded engine size: 1 MiB
[02/24/2025-06:43:27] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +1, GPU +0, now: CPU 1457, GPU 3491 (MiB)
[02/24/2025-06:43:27] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 1457, GPU 3491 (MiB)
[02/24/2025-06:43:27] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +0, now: CPU 0, GPU 0 (MiB)
[02/24/2025-06:43:27] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1455, GPU 3491 (MiB)
[02/24/2025-06:43:27] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 1455, GPU 3491 (MiB)
[02/24/2025-06:43:27] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +0, now: CPU 0, GPU 0 (MiB)
[02/24/2025-06:43:30] [TRT] [E] 1: [context.cpp::setStream::121] Error Code 1: Cudnn (CUDNN_STATUS_MAPPING_ERROR)

Environment

TensorRT Version: 8.2.1.9
GPU Type: Jetson Nano 4GB
CUDA Version: 10.2.300
CUDNN Version: 8.2.1.32
Python Version (if applicable): 3.6
TensorFlow Version: 2.7.0+nv22.1
Here’s the inference code I used:
my_code.txt (2.2 KB)
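For readers who can't open the attachment: the log above shows engine deserialization, IExecutionContext creation, and then the setStream failure at inference time. Below is a simplified, hypothetical sketch of that call sequence, not my exact script; it assumes pycuda for buffer management, and the engine path and binding handling are placeholders.

    import numpy as np
    import tensorrt as trt
    import pycuda.driver as cuda
    import pycuda.autoinit  # creates and activates a CUDA context on this thread

    TRT_LOGGER = trt.Logger(trt.Logger.INFO)

    # Deserialize the engine (path is a placeholder)
    with open("model.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
    context = engine.create_execution_context()

    # One page-locked host buffer and one device buffer per binding
    stream = cuda.Stream()
    host_bufs, dev_bufs, bindings = {}, {}, []
    for name in engine:
        size = trt.volume(engine.get_binding_shape(name))
        dtype = trt.nptype(engine.get_binding_dtype(name))
        host_bufs[name] = cuda.pagelocked_empty(size, dtype)
        dev_bufs[name] = cuda.mem_alloc(host_bufs[name].nbytes)
        bindings.append(int(dev_bufs[name]))

    def infer(input_name, output_name, data):
        np.copyto(host_bufs[input_name], data.ravel())
        cuda.memcpy_htod_async(dev_bufs[input_name], host_bufs[input_name], stream)
        # In a script like this, the setStream error from the log would surface
        # here, when the stream handle is handed to the execution context
        context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
        cuda.memcpy_dtoh_async(host_bufs[output_name], dev_bufs[output_name], stream)
        stream.synchronize()
        return host_bufs[output_name]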
Please help me figure out how to fix this error.

Hi @lululu991129 ,
Could you please check the pointers below:

  1. Check cuDNN Version: Ensure compatibility between the cuDNN, TensorRT, and CUDA versions you’re using on your system (the sketch after this list prints the versions that are actually loaded).
  2. Verify Memory Allocation: Ensure that your application has allocated sufficient memory for inference, and check for any memory leaks or conflicts that could be causing the issue (the same sketch also prints free GPU memory).
  3. Update Libraries: It’s advisable to update CUDA, cuDNN, TensorRT, and relevant drivers to the latest versions available for the Jetson Nano.
If the issue persists, please share your model with us along with the repro steps/scripts.
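
For pointers 1 and 2, here is a minimal sketch you can run on the Nano to print the library versions that are actually loaded and the free GPU memory. It assumes the default JetPack library names (libcudart.so, libcudnn.so); adjust if your install differs.

    import ctypes
    import tensorrt as trt
    import pycuda.driver as cuda
    import pycuda.autoinit  # initializes a CUDA context so mem_get_info works

    print("TensorRT:", trt.__version__)

    # CUDA runtime version, e.g. 10020 == 10.2
    cudart = ctypes.CDLL("libcudart.so")
    ver = ctypes.c_int()
    cudart.cudaRuntimeGetVersion(ctypes.byref(ver))
    print("CUDA runtime:", ver.value)

    # cuDNN version, e.g. 8201 == 8.2.1
    cudnn = ctypes.CDLL("libcudnn.so")
    cudnn.cudnnGetVersion.restype = ctypes.c_size_t
    print("cuDNN:", cudnn.cudnnGetVersion())

    # Free vs. total GPU memory (the Nano shares its 4 GB between CPU and GPU)
    free, total = cuda.mem_get_info()
    print("GPU memory: %d MiB free / %d MiB total" % (free >> 20, total >> 20))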

Thanks

Thank you. I have solved the problem.