Recompiling the kernel to enable Docker containers on the DRIVE device

Please provide the following info (tick the boxes after creating this topic):
Software Version
DRIVE OS 6.0.8.1
DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other

Target Operating System
Linux
QNX
other

Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure its number)
other

SDK Manager Version
1.9.3.10904
other (2.0.0.11405)

Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other

I have a question about running Docker on the DRIVE device. I came across the issue posted here ( [BUG] failed to start docker container in orin target with error: failed to create endpoint on network bridge, operation not supported ), which is exactly the issue I am facing. However, I am stuck at the very first step of the solution.

The solution in that post suggests compiling the kernel, following the compilation steps here ( Compiling the Kernel (Kernel 5.15) | NVIDIA Docs ). However, I don't know the context in which these steps should run. Do I run them on my host laptop, or inside the container on the host laptop? Does this have anything to do with the SDK Manager? Or do I run the kernel compilation steps on the DRIVE device itself?

  • Could someone help me by pointing out which is the case?
  • What is $NV_WORKSPACE in any of the cases?

Please refer to Install DRIVE OS Linux Docker Containers from NGC to set up the Docker environment on your host system for the kernel compilation steps mentioned in the topic.
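For reference, a minimal sketch of starting the SDK build container on the host, assuming the image tag reported later in this thread; the NGC page documents the exact `docker run` flags, so the command is only echoed here for review rather than executed:

```shell
# Image tag as reported by `docker image list` in this thread (an assumption;
# your pulled tag may differ).
IMAGE="nvcr.io/drive/driveos-sdk/drive-agx-orin-linux-aarch64-sdk-build-x86:6.0.8.1-0006"

# Print the command so it can be checked against the NGC instructions before
# running it for real (flags such as --privileged and the USB mount are
# placeholders, not the authoritative invocation).
echo docker run -it --privileged -v /dev/bus/usb:/dev/bus/usb "${IMAGE}"
```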

Thanks for the prompt reply. I have followed the steps and was able to flash DRIVE OS to the device from the host (inside a Docker container on the host). The docker image list command reports the container image as: nvcr.io/drive/driveos-sdk/drive-agx-orin-linux-aarch64-sdk-build-x86 6.0.8.1-0006 d055a29d6559 5 months ago 58GB

What I am not sure about is where to run the commands from the manual page (Compiling the Kernel (Kernel 5.15) …). Is it inside this Docker container? If so, what is $NV_WORKSPACE?

It should be the default path, /drive, when running the docker container.
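In shell terms, once inside the container the kernel compilation steps can assume the following (a sketch; /drive is the default path stated above):

```shell
# NV_WORKSPACE is the workspace root the kernel compilation steps refer to.
# Inside the DRIVE OS SDK build container it defaults to /drive.
export NV_WORKSPACE=/drive
echo "Run the kernel compilation steps with sources under: ${NV_WORKSPACE}"
```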

Thanks! Another question,

So far I have finished all the steps there, with the necessary modifications from another thread. Now I am looking at step 14 and do not know where the SDK Bootburn tool is. Is it inside the same Docker container? If so, are there instructions on how to use this tool to properly flash the image I just built?

I previously used the following command, documented in "Set Up DRIVE OS Linux with NVIDIA GPU Cloud (NGC)", to flash the device.

./flash.py /dev/ttyACM1 p3710

But I have not used bootburn before.

Additional information: the links in that thread, which seemed to contain a solution, are broken.

A detailed device information:

  • Part number: 940-63710-0010-D00
  • Platform: p3710
  • Detailed model number: p3710-10-s05

I followed the steps and used the simple method to flash: ./flash.py /dev/ttyACM1 p3710. The Docker container still fails to start.

Here is the log from the flashing.
log_jn1q76tzm3suvapgxdwlcy8b4kfeh9i0.txt (2.1 KB)

The error when starting the container reads:

docker: Error response from daemon: failed to create endpoint dazzling_black on network bridge: failed to add the host (veth6be7bf3) <=> sandbox (vethb5351de) pair interfaces: operation not supported.

I ran with docker run -it arm64v8/ubuntu:20.04.
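As a diagnostic sketch: the "operation not supported" error typically means the running kernel lacks the veth/bridge options Docker's bridge network needs, which is why the linked thread recommends recompiling the kernel. On the target you can inspect the kernel config (the path below is an assumption; some kernels expose /proc/config.gz instead):

```shell
# Check whether a kernel config option is built in (=y) or modular (=m).
check_opt() {
  grep -q "^$1=[ym]" "$2" 2>/dev/null && echo "$1 enabled" || echo "$1 missing or config not readable"
}

# Assumed config location; adjust for your target if it differs.
CONFIG="/boot/config-$(uname -r)"

VETH_STATUS="$(check_opt CONFIG_VETH "$CONFIG")"
BRIDGE_STATUS="$(check_opt CONFIG_BRIDGE "$CONFIG")"
echo "$VETH_STATUS"
echo "$BRIDGE_STATUS"
```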

Please refer to the "Bind and flash the Tegra board" example on the "To flash Yocto built images via bootburn" documentation page. Run the bind_partitions command with your board specified ("p3710-10-s05", per your post).
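A sketch of that bind step with the board name from earlier in this thread; the exact bind_partitions arguments come from the bootburn documentation page, so the flags here are placeholders and the command is only echoed for review:

```shell
# Detailed model number reported earlier in this thread.
BOARD="p3710-10-s05"

# Print the bind step rather than executing it, so the real invocation can be
# checked against the "Bind and flash the Tegra board" example first.
echo "bind_partitions -b ${BOARD} linux"
```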