Solved problem with JP 4.4 (L4T R32-4.2) and cuDNN availability inside L4T docker container

After updating the JN to JP 4.4 (L4T R32-4.2) via apt, I looked deeper into cudnn.csv. IMHO this file was not built properly: it was in DOS format and the entries were not ordered properly.

Another problem I encountered was the indirection through /etc/alternatives; this was not done right for all includes.
cuDNN 8 also looks different than before: it was split into several includes and libraries, so I couldn't compile OpenCV properly and was stuck with cuDNN 7.6.3.

I guess the csv file must look like:

lib, /usr/include/aarch64-linux-gnu/cudnn_adv_infer_v8.h
sym, /usr/include/aarch64-linux-gnu/cudnn_adv_infer.h
sym, /etc/alternatives/cudnn_adv_infer_h

Also, the file was written with 0x0D,0x0A (CRLF) line endings rather than 0x0A (LF).
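To see the problem for yourself, you can reproduce and fix it on a throwaway file (the paths and file names below are just examples): `od -c` makes the stray `\r` visible, and `tr -d '\r'` does the same job as dos2unix:

```shell
# Create a sample csv line with DOS (CRLF) line endings, like the broken cudnn.csv
printf 'sym, /etc/alternatives/cudnn_adv_infer_h\r\n' > /tmp/sample.csv

# od -c shows the trailing \r \n pair (0x0D 0x0A)
od -c /tmp/sample.csv

# Strip the carriage returns (same effect as dos2unix)
tr -d '\r' < /tmp/sample.csv > /tmp/sample_unix.csv

# Now each line ends with \n (0x0A) only
od -c /tmp/sample_unix.csv
```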

Here are my changes:

  1. Install the dos2unix tool with sudo apt install dos2unix
  2. Change directory with cd /etc/nvidia-container-runtime/host-files-for-container.d
  3. Convert cudnn.csv to Unix newline format with dos2unix cudnn.csv
  4. Revise the entries
  5. Open an L4T docker container to verify the changes
  6. Repeat steps 3, 4 and 5 to refine the entries; some were missing

NOTE: Keep in mind everything goes through /etc/alternatives, that means
[sym, …] /etc/alternatives/libcudnn_adv_infer_so -> /usr/lib/aarch64-linux-gnu/

[sym, …] -> /etc/alternatives/libcudnn_adv_infer_so
[sym, …] ->
[lib, …]

lib, /usr/lib/aarch64-linux-gnu/
sym, /usr/lib/aarch64-linux-gnu/
sym, /usr/lib/aarch64-linux-gnu/
sym, /etc/alternatives/libcudnn_adv_infer_so
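The chain above can be mocked in a scratch directory (the file names below are hypothetical; the real files live under /usr/lib/aarch64-linux-gnu and /etc/alternatives) to see why every hop must resolve before the runtime can propagate the final file:

```shell
# Mock the two-hop alternatives chain with hypothetical names
mkdir -p /tmp/cudnn-demo/lib /tmp/cudnn-demo/alternatives

# the actual library file (the "lib" entry in the csv)
echo 'fake cudnn binary' > /tmp/cudnn-demo/lib/libcudnn_adv_infer.so.8.0.0

# the /etc/alternatives hop (a "sym" entry)
ln -sf /tmp/cudnn-demo/lib/libcudnn_adv_infer.so.8.0.0 \
       /tmp/cudnn-demo/alternatives/libcudnn_adv_infer_so

# the soname link applications open (another "sym" entry)
ln -sf /tmp/cudnn-demo/alternatives/libcudnn_adv_infer_so \
       /tmp/cudnn-demo/lib/libcudnn_adv_infer.so.8

# readlink -f follows every hop down to the real file;
# if any hop is broken, the lookup fails
readlink -f /tmp/cudnn-demo/lib/libcudnn_adv_infer.so.8
```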

NOTE: If the sym or link entry is not right, nothing happens; the file will not be propagated into the L4T docker container.


Which container do you use?

For JetPack4.4, please use our container with r32.4.2 flag.


Is cuDNN present in the latest container (l4t r32.4.4)?
OpenCV's cmake seems to have a hard time finding it.
It seems the solution is to link them manually, though:

#RUN ln -s /usr/include/aarch64-linux-gnu/cudnn_v8.h /usr/include/cudnn.h && \
#    ln -s /usr/include/aarch64-linux-gnu/cudnn_version_v8.h /usr/include/cudnn_version.h && \
#    ln -s /usr/include/aarch64-linux-gnu/cudnn_backend_v8.h /usr/include/cudnn_backend.h && \
#    ln -s /usr/include/aarch64-linux-gnu/cudnn_adv_infer_v8.h /usr/include/cudnn_adv_infer.h && \
#    ln -s /usr/include/aarch64-linux-gnu/cudnn_adv_train_v8.h /usr/include/cudnn_adv_train.h && \
#    ln -s /usr/include/aarch64-linux-gnu/cudnn_cnn_infer_v8.h /usr/include/cudnn_cnn_infer.h && \
#    ln -s /usr/include/aarch64-linux-gnu/cudnn_cnn_train_v8.h /usr/include/cudnn_cnn_train.h && \
#    ln -s /usr/include/aarch64-linux-gnu/cudnn_ops_infer_v8.h /usr/include/cudnn_ops_infer.h && \
#    ln -s /usr/include/aarch64-linux-gnu/cudnn_ops_train_v8.h /usr/include/cudnn_ops_train.h && \
#    ls -ll /usr/include/cudnn*
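Instead of (or in addition to) the symlinks above, OpenCV's CMake can usually be pointed at the split cuDNN 8 files directly. The variable names below come from OpenCV's FindCUDNN module; the paths are assumptions for an L4T container and may need adjusting:

```shell
cmake -D WITH_CUDNN=ON \
      -D OPENCV_DNN_CUDA=ON \
      -D CUDNN_INCLUDE_DIR=/usr/include/aarch64-linux-gnu \
      -D CUDNN_LIBRARY=/usr/lib/aarch64-linux-gnu/libcudnn.so.8 \
      ..
```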


The base container only has the OS and CUDA (mounted) pre-installed to save space.
You can check another container that has the cuDNN package, e.g. l4t-tensorflow.


Thank you for following up.
We were also able to manually create a container with OpenCV 4.5.0 with cuDNN there:
Suggestion to solve Tegra Nvidia-docker issues

Good! Thanks for the feedback.