Running Docker Containers Directly on NVIDIA DRIVE AGX Orin

Originally published at:

Learn how to run a few sample applications inside a Docker container running on the target NVIDIA DRIVE AGX hardware.


Hi, @jwitsoe

Friendly ping: is the feature of running Docker containers directly on DRIVE AGX Orin available now (as of 2023-03-10) in SDK version 6.0.4?

Thanks and looking forward to your reply!

Hi @lizhensheng,
The feature is available from SDK version 6.0.5 onwards.


Thank you for your quick reply!

I know about the runtime container for Jetson AGX, l4t-jetpack (NVIDIA L4T JetPack | NVIDIA NGC), which contains all of the SDKs NVIDIA provides for Jetson.

Is there a runtime container for DRIVE AGX products similar to l4t-jetpack?
Is there any tutorial on building a runtime container on the DRIVE AGX Orin that can make use of the DRIVE OS SDK? (Perhaps the name would be v5l-drivesdk.)


Only host-side DRIVE OS SDK Docker containers are available as of now. There is no tutorial yet, as we recommend keeping development on the host machine. For now, the blog post will help with running sample applications within Docker containers on the target.


Hey @kchemudupati, from the blog you shared, it seems we can only run Docker containers on the Orin target in runtime mode. Is it possible to create a personal Docker image based on the running runtime container, so we can exec into it and do some debugging? Thanks

You should be able to run the containers without the --runtime flag as well. The flag is only needed when access to the GPU is required.

You should also be able to build your own docker images and run them just like on a host system.
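To illustrate the difference, here is a minimal sketch of the two invocations. Assumptions beyond what the thread states: Docker and the NVIDIA container runtime are already set up on a flashed target, and the ubuntu:20.04 image and cudaNvSci binary (which appear later in this thread) are used as examples. The RUN_ON_TARGET guard just makes the sketch a no-op anywhere else.

```shell
#!/bin/sh
# Sketch only: compare a plain container run with a GPU-enabled run.
# Set RUN_ON_TARGET=1 on a flashed DRIVE AGX Orin to actually execute.
if [ "${RUN_ON_TARGET:-0}" = "1" ]; then
  # No GPU access needed: the --runtime flag can be omitted.
  sudo docker run --rm ubuntu:20.04 uname -m

  # GPU access required (e.g. CUDA samples): add --runtime nvidia --gpus all
  # and mount the current directory so the container sees the sample binary.
  sudo docker run --rm --runtime nvidia --gpus all \
    -v "$(pwd)":"$(pwd)" -w "$(pwd)" ubuntu:20.04 ./cudaNvSci
else
  echo "skipped: set RUN_ON_TARGET=1 on the flashed target"
fi
```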

Thanks. What confuses us is that we need to run programs on the aarch64 architecture, but we cannot find an aarch64 Docker image that already has DRIVE OS as a base, and as developers we cannot get the critical deb packages, so we cannot build our own image.

So do you have any ideas about that? Thanks for your help~

That is correct: there are currently no aarch64 DRIVE OS base Docker images, as we recommend that building and development be done on the host machine itself.

You should be able to mount the targetfs into an aarch64 QEMU environment on the host machine and build your image there, or you can flash the target and build your image directly on the target itself. Additionally, as mentioned in the blog post, you can edit the two csv files to make any libraries and drivers available inside the container.
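The QEMU approach above could look roughly like this. Everything here is an assumption-laden sketch: the targetfs path is a placeholder, qemu-user-static with binfmt_misc registration is assumed on the x86 host, and the RUN_ON_HOST guard keeps it inert elsewhere.

```shell
#!/bin/sh
# Sketch only: chroot into the mounted targetfs through QEMU user-mode
# emulation to get an aarch64 build environment on the x86 host.
TARGETFS="${TARGETFS:-/path/to/targetfs}"   # placeholder: your extracted targetfs
if [ "${RUN_ON_HOST:-0}" = "1" ] && [ -d "$TARGETFS" ]; then
  # Make the aarch64 emulator available inside the chroot.
  sudo cp "$(command -v qemu-aarch64-static)" "$TARGETFS/usr/bin/"
  # Bind-mount the pseudo-filesystems the build tools expect.
  sudo mount --bind /dev  "$TARGETFS/dev"
  sudo mount --bind /proc "$TARGETFS/proc"
  # Drop into an aarch64 shell; build your image contents from here.
  sudo chroot "$TARGETFS" /bin/bash
else
  echo "skipped: set RUN_ON_HOST=1 and TARGETFS to a real targetfs to try it"
fi
```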


Hi @kchemudupati, I saw you built the sample code directly on the target; how can I do that? I don’t have nvcc on my target.

Since there is no compile/debug toolchain in the target-orin-kit, the sample code is cross-compiled on the host machine with the deb/Docker DRIVE SDK development environment.

@kchemudupati am I right?



Do you know of any roadmap for this?


The blog post showed compiling a small CUDA sample directly on the target for simplicity. The recommended method is to cross-compile on the host.

You can run the sample compilation command on the target after flashing the DRIVE AGX Orin with the DRIVE OS SDK. You should then find the CUDA toolchain and samples directly on the target at /usr/local/cuda-11.4/, as shown in the blog post.
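Putting those steps together, a native build on the target might look like the sketch below. The samples path is the one mentioned above; using the cudaNvSci sample directory (which appears elsewhere in this thread) is an assumption, and the guard keeps the sketch inert off-target.

```shell
#!/bin/sh
# Sketch only: build a CUDA sample natively on the flashed target, where
# the toolchain lives under /usr/local/cuda-11.4/ after flashing DRIVE OS.
if [ "${RUN_ON_TARGET:-0}" = "1" ]; then
  cd /usr/local/cuda-11.4/samples/0_Simple/cudaNvSci || exit 1
  make           # uses the on-target nvcc from /usr/local/cuda-11.4/bin
  ./cudaNvSci    # run the freshly built sample
else
  echo "skipped: set RUN_ON_TARGET=1 on the flashed target"
fi
```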

  1. There is a CUDA dev toolchain in the target-orin-kit, but it is not enough. There is no DRIVE SDK dev toolchain in the target-orin-kit; that is only included in the host-driveos-docker.
  2. Some public CUDA dev toolchains can be installed in the target Docker container, but they should only be used for debugging, not for compiling.

I wonder if you agree with the principles above? @kchemudupati

If you agree, my question is how to ensure that any program cross-compiled in the host-driveos-docker runs well in the target-orin-docker.

  1. How can we ensure binary compatibility between host-driveos-docker and target-orin-docker? From the blog I see that using the same distribution of Ubuntu should do the job, e.g. running "sudo docker run --rm --runtime nvidia --gpus all -v $(pwd):$(pwd) -w $(pwd) ubuntu:20.04 ./cudaNvSci" from /usr/local/cuda-11.4/samples/0_Simple/cudaNvSci on the target. Is that enough?



What’s the internal mechanism of /etc/nvidia-container-runtime/host-files-for-container.d/? Is it something like docker run -v filesystem mapping?


After searching and reading, I found that this is the csv mode of nvidia-container-runtime.
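For anyone else landing here, the csv mode works roughly as described: each csv line names a host file, directory, symlink, or device, and at container start the runtime bind-mounts each entry into the container at the same path, similar in effect to an automatic docker run -v mapping. The entries below are illustrative examples only, not an authoritative list from the DRIVE OS csv files.

```shell
#!/bin/sh
# Illustrative csv entries, in the "<type>, <host path>" format used by
# files under /etc/nvidia-container-runtime/host-files-for-container.d/.
# Each listed path gets bind-mounted into the container at the same path.
cat <<'EOF'
lib, /usr/lib/aarch64-linux-gnu/libcuda.so.1
sym, /usr/lib/aarch64-linux-gnu/libcuda.so
dir, /usr/local/cuda-11.4/targets/aarch64-linux
dev, /dev/nvhost-ctrl-gpu
EOF
```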


Could you help me with this topic when I use the target Docker container with a non-root user?

[BUG] target-docker-container running cuda-samples require unintended extra permission - DRIVE AGX Orin / DRIVE AGX Orin General - NVIDIA Developer Forums