Could not initialize cuDNN on Jetson Nano

• Hardware Platform (Jetson / GPU) Jetson Nano
• DeepStream Version 6.0
Hello, I have installed DeepStream 6.0 but I am unable to run the deepstream-app; it seems to be a problem related to cuDNN.

sudo deepstream-app -c source2_1080p_dec_infer-resnet_demux_int8.txt 
Opening in BLOCKING MODE 
Opening in BLOCKING MODE 
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b1gpu0_int8.engine open error
0:00:02.881627187 17122   0x7f1c002330 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b1gpu0_int8.engine failed
0:00:02.882854352 17122   0x7f1c002330 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b1gpu0_int8.engine failed, try rebuild
0:00:02.882911072 17122   0x7f1c002330 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
ERROR: [TRT]: 1: [executionResources.cpp::setTacticSources::156] Error Code 1: Cudnn (Could not initialize cudnn, please check cudnn installation.)
ERROR: Build engine failed from config file
ERROR: failed to build trt engine.
0:00:05.601983510 17122   0x7f1c002330 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:05.603817252 17122   0x7f1c002330 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:05.603872045 17122   0x7f1c002330 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:05.603937046 17122   0x7f1c002330 WARN                 nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:05.603964859 17122   0x7f1c002330 WARN                 nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Config file path: /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/config_infer_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: <main:707>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/config_infer_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed

Although the CUDA device is detected:

usr/local/cuda/samples/1_Utilities/deviceQuery$ ./deviceQuery 
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA Tegra X1"
  CUDA Driver Version / Runtime Version          10.2 / 10.2
  CUDA Capability Major/Minor version number:    5.3
  Total amount of global memory:                 3964 MBytes (4156514304 bytes)
  ( 1) Multiprocessors, (128) CUDA Cores/MP:     128 CUDA Cores
  GPU Max Clock rate:                            922 MHz (0.92 GHz)
  Memory Clock rate:                             13 Mhz
  Memory Bus Width:                              64-bit
  L2 Cache Size:                                 262144 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 32768
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            Yes
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            No
  Supports Cooperative Kernel Launch:            No
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 0 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.2, CUDA Runtime Version = 10.2, NumDevs = 1
Result = PASS

Any idea how to solve this?

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

• Hardware Platform (Jetson / GPU) : Jetson
• DeepStream Version 6.0.1
• JetPack Version (valid for Jetson only) 4.6.3
• TensorRT Version 8.2.1.9

• Issue Type (questions, new requirements, bugs): I have installed DeepStream 6.0.1 on a Jetson Nano, and when I try to run the sample app I get this error:
ERROR: [TRT]: 1: [executionResources.cpp::setTacticSources::156] Error Code 1: Cudnn (Could not initialize cudnn, please check cudnn installation.)

• How to reproduce the issue? Run deepstream-app with the source2_1080p_dec_infer-resnet_demux_int8.txt config file.

Could you share the result of “dpkg -l | grep cudnn”? Please also check whether the device meets these requirements.
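One way to check the cuDNN installation independently of DeepStream is to build and run the sample shipped with the libcudnn8-samples package. This is a sketch assuming the default Jetson install location (/usr/src/cudnn_samples_v8); it is guarded so it does nothing on a machine without the samples:

```shell
# Sketch: verify cuDNN on its own by building the bundled mnistCUDNN sample
# (assumes libcudnn8-samples is installed at its default Jetson path).
if [ -d /usr/src/cudnn_samples_v8 ]; then
    cp -r /usr/src/cudnn_samples_v8 "$HOME"/
    cd "$HOME"/cudnn_samples_v8/mnistCUDNN
    make clean && make
    ./mnistCUDNN          # "Test passed!" means cuDNN initializes correctly
else
    echo "cuDNN samples not found; run this on the Jetson"
fi
```

If mnistCUDNN fails with a similar initialization error, the problem is in the cuDNN/CUDA install itself rather than in DeepStream.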

dpkg -l|grep cudnn
ii  libcudnn8                                             8.2.1.32-1+cuda10.2                        arm64        cuDNN runtime libraries
ii  libcudnn8-dev                                         8.2.1.32-1+cuda10.2                        arm64        cuDNN development libraries and headers
ii  libcudnn8-samples                                     8.2.1.32-1+cuda10.2                        arm64        cuDNN documents and samples
ii  nvidia-container-csv-cudnn                            8.2.1.32-1+cuda10.2                        arm64        Jetpack CUDNN CSV file

and :

dpkg -l | grep nvinfer
ii  libnvinfer-bin                                        8.2.1-1+cuda10.2                           arm64        TensorRT binaries
ii  libnvinfer-dev                                        8.2.1-1+cuda10.2                           arm64        TensorRT development libraries and headers
ii  libnvinfer-doc                                        8.2.1-1+cuda10.2                           all          TensorRT documentation
ii  libnvinfer-plugin-dev                                 8.2.1-1+cuda10.2                           arm64        TensorRT plugin libraries
ii  libnvinfer-plugin8                                    8.2.1-1+cuda10.2                           arm64        TensorRT plugin libraries
ii  libnvinfer-samples                                    8.2.1-1+cuda10.2                           all          TensorRT samples
ii  libnvinfer8                                           8.2.1-1+cuda10.2                           arm64        TensorRT runtime libraries
ii  python3-libnvinfer                                    8.2.1-1+cuda10.2                           arm64        Python 3 bindings for TensorRT
ii  python3-libnvinfer-dev                                8.2.1-1+cuda10.2                           arm64        Python 3 development package for TensorRT
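As a quick sanity check, the `+cudaX.Y` suffix in each package version shows which CUDA release it was built against; all of them should match the installed CUDA (10.2 here). A small sketch, with two of the dpkg lines above embedded as sample data so the snippet runs anywhere:

```shell
# Extract the CUDA suffix from dpkg-style version strings; on the device,
# pipe in `dpkg -l | grep -E 'cudnn|nvinfer'` instead of the sample lines.
printf '%s\n' \
  'ii  libcudnn8    8.2.1.32-1+cuda10.2  arm64  cuDNN runtime libraries' \
  'ii  libnvinfer8  8.2.1-1+cuda10.2     arm64  TensorRT runtime libraries' \
| awk '{ split($3, v, "+"); print $2, "built against", v[2] }'
```

Here both report cuda10.2, so the packages at least agree with each other.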

I think it could be related to the DeepStream installation. Should I try to reinstall it?

Please install JetPack 4.6.1, which is consistent with the requirements.

Does that mean I need to re-flash the Jetson?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Yes, please use the same component versions as in the table above.
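To confirm which L4T (Jetson Linux) release is currently flashed on the board, a guarded sketch (it prints a fallback message when run off-device):

```shell
# /etc/nv_tegra_release exists on flashed Jetson boards and names the L4T
# release, from which the JetPack version can be looked up.
rel_file=/etc/nv_tegra_release
if [ -f "$rel_file" ]; then
    msg="$(head -n1 "$rel_file")"
else
    msg="not a Jetson board"
fi
echo "$msg"
```

The nvidia-jetpack meta-package, when installed, also reports the JetPack version via `dpkg -l | grep nvidia-jetpack`.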
