Ubuntu 20.04 container with CUDA + TensorRT on an NVIDIA Jetson Orin Nano running Ubuntu 22.04

Hi,

I have an NVIDIA Jetson Orin Nano running Ubuntu 22.04. I want to create a container running Ubuntu 20.04 (because I want to run ROS Noetic) with CUDA + TensorRT. When I run:
docker run -it --rm --runtime=nvidia --gpus all ubuntu:latest bash
everything works: I can verify CUDA access with nvidia-smi. However, this image is based on Ubuntu 22.04. When I try to run
docker run -it --rm --runtime=nvidia --gpus all ubuntu:20.04 bash
I get the following error:
NVIDIA-SMI couldn't find libnvidia-ml.so library in your system. Please make sure that the NVIDIA Display Driver is properly installed and present in your system.
Please also try adding directory that contains libnvidia-ml.so to your system PATH.

I also tried running Docker with nvidia/cuda:11.8.0-base-ubuntu20.04 and nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel, but I get the same error.
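In case it helps with debugging: as far as I understand, on Jetson the NVIDIA container runtime does not ship driver libraries in the image but bind-mounts them from the host, based on CSV lists. A rough sketch of how to check this (assuming the standard JetPack paths; adjust if yours differ):

```shell
# On the Jetson host: check that libnvidia-ml.so appears in the CSV lists
# the NVIDIA runtime uses to decide which host files to mount
grep -h 'libnvidia-ml' /etc/nvidia-container-runtime/host-files-for-container.d/*.csv

# Inside a container started with --runtime=nvidia: check whether the
# driver libraries were actually mounted (aarch64 multiarch path on Jetson)
docker run --rm --runtime=nvidia ubuntu:20.04 \
  bash -c 'ls /usr/lib/aarch64-linux-gnu/ | grep -i nvidia || echo "no NVIDIA libs mounted"'
```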

When I try the same command on my laptop running Ubuntu 20.04, everything works. Can you help?

Thanks!

Hi,

Please check the jetson-containers build documentation below:
https://github.com/dusty-nv/jetson-containers/blob/master/docs/build.md#2404-containers

By default it uses ubuntu:22.04 on JetPack 6.
A recent update enabled Ubuntu 24.04 support, so ideally the procedure for 20.04 should be similar.

Thanks.

Hello,

Thanks a lot for the reply.

I tried LSB_RELEASE=20.04 CUDA_VERSION=12.6 PYTHON_VERSION=3.12 PYTORCH_VERSION=2.6 jetson-containers build vllm (and also other CUDA versions), but I get this error:

Testing cudnn (vllm:r36.4.3-cu126-cp312-20.04-cudnn)

┌────────────────────────────────────────────────┐
│ > TESTING vllm:r36.4.3-cu126-cp312-20.04-cudnn │
└────────────────────────────────────────────────┘

docker run -t --rm --gpus=all --network=host \
  --env NVIDIA_DRIVER_CAPABILITIES=all \
  --volume /home/nvidia/jetson-containers/packages/cuda/cudnn:/test \
  --volume /home/nvidia/jetson-containers/data:/data \
  --workdir /test \
  vllm:r36.4.3-cu126-cp312-20.04-cudnn \
  /bin/bash -c '/bin/bash test.sh'

#define CUDNN_MAJOR 9
#define CUDNN_MINOR 3
#define CUDNN_VERSION (CUDNN_MAJOR * 10000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL)
#define CUDNN_MAX_SM_MAJOR_NUMBER 9
#define CUDNN_MAX_SM_MINOR_NUMBER 0
#define CUDNN_MAX_DEVICE_VERSION (CUDNN_MAX_SM_MAJOR_NUMBER * 100 + CUDNN_MAX_SM_MINOR_NUMBER * 10)
Executing: conv_sample
Using format CUDNN_TENSOR_NCHW (for INT8x4 and INT8x32 tests use CUDNN_TENSOR_NCHW_VECT_C)
Testing single precision
====USER DIMENSIONS====
input dims are 1, 32, 4, 4
filter dims are 32, 32, 1, 1
output dims are 1, 32, 4, 4
====PADDING DIMENSIONS====
padded input dims are 1, 32, 4, 4
padded filter dims are 32, 32, 1, 1
padded output dims are 1, 32, 4, 4
CUDNN error at conv_sample.cpp:1233, code=1001 (CUDNN_STATUS_NOT_INITIALIZED) in 'cudnnCreate(&handle_)'
test.sh: line 14: 34 Segmentation fault (core dumped) ./conv_sample
[12:26:58] Failed building: vllm

Traceback (most recent call last):
  File "/home/nvidia/jetson-containers/jetson_containers/build.py", line 129, in <module>
    build_container(**vars(args))
  File "/home/nvidia/jetson-containers/jetson_containers/container.py", line 192, in build_container
    test_container(container_name, pkg, simulate)
  File "/home/nvidia/jetson-containers/jetson_containers/container.py", line 364, in test_container
    status = subprocess.run(cmd.replace(NEWLINE, ' '), executable='/bin/bash', shell=True, check=True)
  File "/usr/lib/python3.10/subprocess.py", line 526, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'docker run -t --rm --gpus=all --network=host --env NVIDIA_DRIVER_CAPABILITIES=all --volume /home/nvidia/jetson-containers/packages/cuda/cudnn:/test --volume /home/nvidia/jetson-containers/data:/data --workdir /test vllm:r36.4.3-cu126-cp312-20.04-cudnn /bin/bash -c '/bin/bash test.sh' 2>&1 | tee /home/nvidia/jetson-containers/logs/20250425_121112/test/vllm_r36.4.3-cu126-cp312-20.04-cudnn_test.sh.txt; exit ${PIPESTATUS[0]}' returned non-zero exit status 139.
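For reference on reading that failure: exit statuses above 128 follow the shell convention of 128 + signal number, so 139 means the test binary was killed by signal 11 (SIGSEGV), matching the "Segmentation fault (core dumped)" line above. A quick way to decode it:

```shell
# Decode exit status 139: values above 128 mean "terminated by a signal"
status=139
sig=$((status - 128))
echo "$sig"        # 11
kill -l "$sig"     # prints the signal name (SEGV in bash)
```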

Hi,

Sorry for the missing information.

We don't have an upgradable cuDNN for Ubuntu 20.04.
So you will need to use JetPack 5 and the corresponding packages if Ubuntu 20.04 is required.
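If it helps, you can confirm which L4T release (and thus JetPack generation) the device is running; JetPack 5 corresponds to L4T r35.x (Ubuntu 20.04 based), JetPack 6 to r36.x (Ubuntu 22.04 based). A sketch, assuming the standard Jetson release file:

```shell
# Print the full L4T release string, e.g. "# R35 (release), REVISION: 4.1, ..."
cat /etc/nv_tegra_release

# Extract just the major release tag (R35 = JetPack 5, R36 = JetPack 6)
grep -o 'R[0-9]\+' /etc/nv_tegra_release | head -n1
```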

Thanks.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.