Please provide the following info (tick the boxes after creating this topic):
Software Version
DRIVE OS 6.0.10.0
DRIVE OS 6.0.8.1
DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other
Target Operating System
Linux
QNX
other
Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure its number)
other
SDK Manager Version
2.1.0
other
Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other
Issue Description
Error building docker container on the board
Dockerfile (works):
FROM nvidia/cuda:11.8.0-runtime-ubuntu20.04 AS base
CMD ["bash"]
Dockerfile (does not work; fails with the error below):
FROM nvidia/cuda:11.8.0-runtime-ubuntu20.04 AS base
RUN apt-get update
CMD ["bash"]
To build:
docker build -t simple_docker .
I have followed the topic "[BUG] failed to start docker container in orin target with error: failed to create endpoint on network bridge, operation not supported" and rebuilt the kernel, so I am now able to run Docker and access the GPU.
Error String
"/bin/sh -c apt-get update" did not complete successfully: failed to create endpoint n56fp4yfdhyz56470812jbw5x on network bridge: failed to add the host (vethaecfa82) <=> sandbox (veth7c38035) pair interfaces: operation not supported
Dear @ashwin.alapakkamkannan ,
Could you try adding --network host to your docker build command?
I tested it on DRIVE OS 6.0.10 and it worked.
nvidia@tegra-ubuntu:~/testDocker$ sudo docker build --network host -t simple_docker .
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM nvidia/cuda:11.8.0-runtime-ubuntu20.04 AS base
---> 8a1f4b6e4586
Step 2/3 : RUN apt-get update
---> Running in 86a5e7f069c2
Get:1 http://ports.ubuntu.com/ubuntu-ports focal InRelease [265 kB]
Get:2 http://ports.ubuntu.com/ubuntu-ports focal-updates InRelease [128 kB]
Get:3 http://ports.ubuntu.com/ubuntu-ports focal-backports InRelease [128 kB]
Get:4 http://ports.ubuntu.com/ubuntu-ports focal-security InRelease [128 kB]
Get:5 http://ports.ubuntu.com/ubuntu-ports focal/restricted arm64 Packages [1317 B]
Get:6 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 Packages [1234 kB]
Get:7 http://ports.ubuntu.com/ubuntu-ports focal/universe arm64 Packages [11.1 MB]
Get:8 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/sbsa InRelease [1579 B]
Get:9 http://ports.ubuntu.com/ubuntu-ports focal/multiverse arm64 Packages [139 kB]
Get:10 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 Packages [3821 kB]
Get:11 http://ports.ubuntu.com/ubuntu-ports focal-updates/multiverse arm64 Packages [14.9 kB]
Get:12 http://ports.ubuntu.com/ubuntu-ports focal-updates/restricted arm64 Packages [82.9 kB]
Get:13 http://ports.ubuntu.com/ubuntu-ports focal-updates/universe arm64 Packages [1505 kB]
Get:14 http://ports.ubuntu.com/ubuntu-ports focal-backports/universe arm64 Packages [27.8 kB]
Get:15 http://ports.ubuntu.com/ubuntu-ports focal-backports/main arm64 Packages [54.8 kB]
Get:16 http://ports.ubuntu.com/ubuntu-ports focal-security/restricted arm64 Packages [77.0 kB]
Get:17 http://ports.ubuntu.com/ubuntu-ports focal-security/main arm64 Packages [3425 kB]
Get:18 http://ports.ubuntu.com/ubuntu-ports focal-security/multiverse arm64 Packages [8083 B]
Get:19 http://ports.ubuntu.com/ubuntu-ports focal-security/universe arm64 Packages [1214 kB]
Get:20 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/sbsa Packages [1832 kB]
Fetched 25.2 MB in 8s (3254 kB/s)
Reading package lists...
Removing intermediate container 86a5e7f069c2
---> 5db341f9d7fd
Step 3/3 : CMD ["bash"]
---> Running in 87fc760b2c2d
Removing intermediate container 87fc760b2c2d
---> 35495d7d2c69
Successfully built 35495d7d2c69
Successfully tagged simple_docker:latest
Thanks, this worked!
How can I do the same using docker compose? Our stack uses docker compose and we'd prefer to keep using it.
Dear @ashwin.alapakkamkannan ,
Did you try setting the build network parameter to host in your YAML file?
I have tried:

service1:
  image: test
  build:
    context: .
    dockerfile: ./drive_orin.ubuntu_base.dockerfile
    network: host

I tried network_mode: host as well, which also errors out.
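For completeness, the full compose file I am testing looks roughly like this (a minimal sketch; the version line and comments are mine, the service/image/Dockerfile names are from my setup):

```yaml
# docker-compose.yml (minimal sketch of what I am testing)
version: "3.8"
services:
  service1:
    image: test
    build:
      context: .
      dockerfile: ./drive_orin.ubuntu_base.dockerfile
      network: host      # build-time network, like docker build --network host
    network_mode: host   # runtime network; this variant also errors out for me
```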

To provide more context, I am trying to build onnxruntime 1.13.1 (with TensorRT support) in an nvidia/cuda:11.7.1-cudnn8-devel-ubuntu22.04 container.
However, I am not able to successfully install TensorRT in this container.
I've done similar builds on Jetson Orin, where I could use docker compose, pass the host GPU/TensorRT drivers to the container at build time, and build onnxruntime from source.
- How can I pass host libraries and drivers to containers at build time on DRIVE Orin?
- And/or, how can I install TensorRT in this container that is to be run on DRIVE Orin?
Thanks!
Could you also check docker compose usage in other Docker-related forums?
I see from Running Docker Containers Directly on NVIDIA DRIVE AGX Orin | NVIDIA Technical Blog that the following is tested and verified on DRIVE. Does it work for you?
## Running custom applications inside a target-side Docker container
Two files, devices.csv and drivers.csv, are provided within the RFS flashed onto the board for applications inside Docker containers to access the devices, drivers, and shared libraries. These files each have a list of devices, drivers, and shared libraries needed for an application to run successfully inside the Docker container.
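A quick way to check what those files provide on the target (a sketch; the directory below is an assumption based on how the NVIDIA container runtime's CSV mode is commonly configured, and it may differ per DRIVE OS release):

```bash
# Assumed location of the CSV files consumed by the NVIDIA container runtime
# in CSV mode; adjust if your DRIVE OS release places them elsewhere.
ls /etc/nvidia-container-runtime/host-files-for-container.d/
# devices.csv  drivers.csv

# Each line has the form "<type>, <path>" (type is dev, lib, sym, or dir),
# i.e. which device nodes, libraries, symlinks, or directories from the target
# are mounted into containers started with the NVIDIA runtime.
head /etc/nvidia-container-runtime/host-files-for-container.d/drivers.csv
```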
Is the requirement to run other CUDA+TRT versions on DRIVE than the ones provided with the DRIVE OS package?
Thanks,
I don't mind using the DRIVE CUDA + TRT, but I would need access to these drivers during docker build. To be more specific, onnxruntime is built from source in the Dockerfile, so it needs access to /usr/lib/aarch64-linux-gnu, /usr/local/cuda, and /usr/local/cuda/bin/nvcc from the host at build time.
For comparison, I could do this on Jetson with docker compose, but a similar compose file doesn't seem to work on DRIVE due to the lack of virtual ethernet support.
So, essentially, my two options are:
- have a container image with its own CUDA + TRT versions
- figure out how to access the host CUDA + TRT during docker build
@SivaRamaKrishnaNV How can I pass the host /usr/lib/aarch64-linux-gnu into the docker build context?
In the case of Jetson, I could use a container image matching the underlying JetPack version and mount the GPU drivers at runtime.
How about running the image (after building it like in #3) with docker run, mounting the CUDA/TRT paths from the target, and then trying to build onnxruntime from source inside the container?
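For example, something along these lines (a sketch only; "test" is the image tag from your compose file, and the mount list is based on the paths you mentioned, so you may need to add more):

```bash
# Sketch: run the already-built image with the target's CUDA/TensorRT paths
# mounted read-only, then build onnxruntime from source inside the container.
sudo docker run -it --rm \
  -v /usr/local/cuda:/usr/local/cuda:ro \
  -v /usr/lib/aarch64-linux-gnu:/host/aarch64-linux-gnu:ro \
  test bash

# Inside the container, point the onnxruntime build at the mounted paths, e.g.:
#   export CUDACXX=/usr/local/cuda/bin/nvcc
#   export LD_LIBRARY_PATH=/host/aarch64-linux-gnu:$LD_LIBRARY_PATH
```

Mounting the target's aarch64-linux-gnu directory to a side path avoids shadowing the container's own system libraries.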
Please read the official materials first:
Running Docker Containers Directly on NVIDIA DRIVE AGX Orin | NVIDIA Technical Blog
Docker Services | NVIDIA Docs
- For all platforms, a program inside an NVIDIA Docker container accesses the NVIDIA GPU driver installed on the host OS.
- For the DRIVE AGX target platform, a program inside a target-side Docker container also accesses CUDA/TensorRT and the other DRIVE OS libraries installed on the host OS. (That is different from the x86 and Jetson platforms.)
- The devices.csv and drivers.csv files define how the container accesses resources on the target machine.

So you shouldn't build your image on top of a complicated base image like nvidia/cuda:11.8.0-runtime-ubuntu20.04; instead, use a clean enough base image like ubuntu:20.04, and set up the *.csv files to define which libraries the container will access on the target machine.
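As a rough illustration of that approach (a sketch only; the packages installed here are just a guess at what an onnxruntime source build might need):

```dockerfile
# Sketch: clean Ubuntu base with only generic build tools baked into the image.
# CUDA/TensorRT and the other DRIVE OS libraries are not installed here; they
# are made available from the target at run time via the NVIDIA runtime and
# the devices.csv / drivers.csv files.
FROM ubuntu:20.04
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
        build-essential cmake git python3 && \
    rm -rf /var/lib/apt/lists/*
CMD ["bash"]
```

Build it with docker build --network host as earlier in this thread, and start it with the NVIDIA runtime so that the entries listed in the *.csv files are mounted into the container.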