Description
When I run TensorRT inference, I get the following error:
[02/24/2025-06:43:27] [TRT] [I] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 1454, GPU 3491 (MiB)
[02/24/2025-06:43:27] [TRT] [I] Loaded engine size: 1 MiB
[02/24/2025-06:43:27] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +1, GPU +0, now: CPU 1457, GPU 3491 (MiB)
[02/24/2025-06:43:27] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 1457, GPU 3491 (MiB)
[02/24/2025-06:43:27] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +0, now: CPU 0, GPU 0 (MiB)
[02/24/2025-06:43:27] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1455, GPU 3491 (MiB)
[02/24/2025-06:43:27] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 1455, GPU 3491 (MiB)
[02/24/2025-06:43:27] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +0, now: CPU 0, GPU 0 (MiB)
[02/24/2025-06:43:30] [TRT] [E] 1: [context.cpp::setStream::121] Error Code 1: Cudnn (CUDNN_STATUS_MAPPING_ERROR)
Environment
TensorRT Version: 8.2.1.9
GPU Type: Jetson Nano 4GB
CUDA Version: 10.2.300
CUDNN Version: 8.2.1.32
Python Version (if applicable): 3.6
TensorFlow Version: 2.7.0+nv22.1
Here is the inference code I used (attached):
my_code.txt (2.2 KB)
Could you please help me figure out how to fix this error?