What are the steps to get up and running with Dockerized GPU containers?

Assume that I've just flashed the Jetson AGX Orin with the latest version of JetPack, so L4T R36.4 is the current release. What image do I need to extend (e.g. FROM xx) in my Dockerfile to be able to use the GPU with PyTorch? I have the NVIDIA container runtime installed and CUDA 12.6.

Hi,
Here are some suggestions for the common issues:

1. Performance

Please run the commands below before benchmarking a deep learning use case:

$ sudo nvpmodel -m 0
$ sudo jetson_clocks
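
To confirm the change took effect, you can query the current power mode (a quick sanity check; on AGX Orin, mode 0 is typically the MAXN profile):

$ sudo nvpmodel -q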

2. Installation

Installation guides for deep learning frameworks on Jetson:

3. Tutorial

Getting-started deep learning tutorial:

4. Report issue

If these suggestions don’t help and you want to report an issue to us, please share the model, the command/steps, and the customized app (if any) so that we can reproduce the issue locally.

Thanks!

Hi,

You can try nvcr.io/nvidia/pytorch:24.12-py3-igpu, which already has PyTorch preinstalled.
Please make sure the container you choose has an igpu or l4t tag.
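
For example, a minimal Dockerfile could look like the sketch below. Note that 24.12 is just the tag mentioned above; check the framework container release notes to confirm it matches your JetPack 6.x / R36.4 install.

# Base image with PyTorch prebuilt for Jetson (integrated GPU)
FROM nvcr.io/nvidia/pytorch:24.12-py3-igpu

You can then build it and verify that PyTorch sees the GPU (my-pytorch-app is just a placeholder image name):

$ docker build -t my-pytorch-app .
$ docker run --rm --runtime nvidia my-pytorch-app python3 -c "import torch; print(torch.cuda.is_available())"

If this prints True, the container can access the GPU.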

Thanks.

Maybe this is just semantics, but I think what you want to install is nvidia-container-toolkit instead of nvidia-container-runtime.

The nvidia-container-toolkit documentation has a very good overview, installation instructions, and examples of running Dockerized GPU containers on Jetson.
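
For reference, after installing the toolkit, the typical configuration looks something like this (a sketch following the toolkit docs; it registers the NVIDIA runtime with Docker and restarts the daemon):

$ sudo nvidia-ctk runtime configure --runtime=docker
$ sudo systemctl restart docker

You can then confirm the runtime is registered with docker info | grep -i nvidia.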

