CUDA install on L4T with JetPack 5.x

Hi,
I understand from here and here that CUDA and cuDNN are installed inside the container when using Docker images for JetPack >= 5, which causes the large increase in image size when upgrading from L4T 32.x.

My application uses an image based on nvcr.io/nvidia/l4t-pytorch:r35.1.0-pth1.11-py3, which appears in the JetPack 5.0.2 section on the l4t-pytorch containers page:

JetPack 5.0.2 (L4T R35.1.0)
    l4t-pytorch:r35.1.0-pth1.11-py3
        PyTorch v1.11.0
        torchvision v0.12.0
        torchaudio v0.11.0

All CUDA-related jobs run within containers, and my application seems to run fine after removing the CUDA-related packages that the JetPack install pulls in (cuda-toolkit-x-y and libcudnn8). In this scenario, do I need to use JetPack at all?
Could I more simply use L4T 35.1 + the NVIDIA Container Runtime instead of JetPack, or are there other benefits to using JetPack?
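For reference, the host-side cleanup I mean amounts to something like the following sketch. The exact package names depend on the JetPack/CUDA version, so cuda-toolkit-11-4 below is an assumption matching JetPack 5.0.2; adjust to whatever `dpkg -l | grep cuda` reports on your device:

```
# Remove the host-side CUDA toolkit and cuDNN that the JetPack install adds.
# Containers started with the NVIDIA runtime keep working, since on JetPack 5
# they ship their own CUDA/cuDNN.
sudo apt-get remove --purge cuda-toolkit-11-4 libcudnn8
sudo apt-get autoremove
```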

Thank you!

Hi @user12474, yes, if your application is already compiled and ready to deploy, you could rebase against l4t-cuda:runtime, which doesn’t have the full CUDA Toolkit etc. in it and is smaller in size. Typically one would use a multi-stage Dockerfile to copy the binaries over from the “build” container into the “deployment” container.
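A minimal multi-stage sketch of that idea, assuming a hypothetical application binary `myapp` and illustrative image tags (pick the r35.x / CUDA tags matching your L4T release):

```
# Build stage: full JetPack image with the CUDA Toolkit available for compilation
FROM nvcr.io/nvidia/l4t-jetpack:r35.1.0 AS build
WORKDIR /src
COPY . .
RUN make myapp          # hypothetical build step

# Deployment stage: CUDA runtime only, no toolkit, much smaller
FROM nvcr.io/nvidia/l4t-cuda:11.4.19-runtime
COPY --from=build /src/myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```

Only the final stage ends up in the deployed image, so the toolkit, headers, and build artifacts from the first stage add nothing to its size.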

Thank you for your response @dusty_nv.
I still need PyTorch at runtime, so I don’t think I can change the base image that easily, if I understood your suggestion correctly.

The original issue in my question is the double CUDA installation (system + container): are there any benefits to installing JetPack when deploying containerized applications on L4T 35.x?

Oh okay, gotcha - on JetPack 5, no, you don’t need the CUDA Toolkit on your system, since it’s installed inside the containers themselves.

Got it, thanks!
