Flashing Orin NX using custom docker container

Hi all.

I created a customized docker image based on Ubuntu 22.04 and installed all the needed packages and libraries. The problem is that when I try to flash the device from inside the container, it stops with an NFS-related error:

Cleaning up...
Finish generating flash package.
/home/ai-blox/image/Linux_for_Tegra/tools/kernel_flash/l4t_initrd_flash_internal.sh --network usb0 --usb-instance 1-8.1 --device-instance 0 --flash-only --external-device nvme0n1p1 -c "./tools/kernel_flash/flash_l4t_external.xml" --network usb0 blox-orin-nvm external
Entry added by NVIDIA initrd flash tool
/home/ai-blox/image/Linux_for_Tegra/tools/kernel_flash/tmp 127.0.0.1(rw,nohide,insecure,no_subtree_check,async,no_root_squash)
* Stopping NFS kernel daemon
...done.
* Unexporting directories for NFS kernel daemon...
...done.
* Not starting NFS kernel daemon: no support in current kernel.
clnt_create: RPC: Timed out
NFS server is not running
Cleaning up...
Command failed

I know there is a problem with the NFS daemon, so I installed all the NFS server and client packages in the docker image, but with no success so far.
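For reference, the NFS-related part of my Dockerfile looks roughly like this (a sketch only; the exact package set I install may differ slightly, but these are the standard Ubuntu packages):

RUN apt-get update && apt-get install -y \
        nfs-kernel-server nfs-common rpcbind \
    && rm -rf /var/lib/apt/lists/*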

This is the command I use to run the docker container:

docker run -it --privileged -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v /dev:/dev -v /var/run/dbus:/var/run/dbus:ro -v /etc/udev/:/etc/udev/ \
  -v /run/udev:/run/udev:ro -v /home/ai-blox/Downloads/image:/home/ai-blox/image:rw \
  test_engine bash

Is there any configuration I should set up manually to get the flashing to work correctly?

Thanks

Hi,
We would suggest having a host PC running Ubuntu 20.04 or 22.04. This is the requirement we list in the developer guide:

Quick Start — NVIDIA Jetson Linux Developer Guide documentation

@DaneLLL

Thanks for your reply. Because of our internal business logic, we must run the flashing process inside the docker container.
The cause of my previous problem was simply the missing --network host option in my docker run command. However, another problem shows up.
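For reference, the corrected command is just the earlier one with --network host added (a sketch, same mounts as before):

docker run -it --privileged --network host -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v /dev:/dev -v /var/run/dbus:/var/run/dbus:ro -v /etc/udev/:/etc/udev/ \
  -v /run/udev:/run/udev:ro \
  -v /home/ai-blox/Downloads/image:/home/ai-blox/image:rw \
  test_engine bash

With --network host in place, the flash attempt now stops here: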

***************************************
*                                     *
*  Step 3: Start the flashing process *
*                                     *
***************************************
WARNING: Initrd flash script seems to run inside docker. Consider creating
/run/nvidia_initrd_flash/docker_host_network if you use "--network host" option
/home/ai-blox/image/rootfs and /home/ai-blox/image/tools/kernel_flash/images seems to be not exported by nfs server
You can export them by adding entries into /etc/exports
For examples:
/home/ai-blox/image/tools/kernel_flash/images *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)
/home/ai-blox/image/rootfs *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)
Note that these entries exposes the folders to requests from any IPs

Make sure that exported directories have following permissions and owners:
chmod 755 /home/ai-blox/image/rootfs
chown root.root /home/ai-blox/image/rootfs
chmod 755 /home/ai-blox/image/tools/kernel_flash/images
chown root.root /home/ai-blox/image/tools/kernel_flash/images
Cleaning up...

I tried both the /etc/exports entries and the chmod/chown commands, but I still see the same output every time. Intuitively, we should be able to flash from inside the docker container, because SDKManager itself can also flash through docker.


I should add that the way I run the docker container is as below:

docker run -it --privileged -v /dev:/dev:rw -v /var/run/dbus:/var/run/dbus:ro \
  -v /home/ben/Desktop/test/image_orin/Linux_for_Tegra:/home/ai-blox/image \
  --network bridge test_engine bash

Prior to this, I used --network host and hit the problem above. When I use bridge instead of host, that problem goes away, but another one pops up:

***************************************
*                                     *
*  Step 3: Start the flashing process *
*                                     *
***************************************
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...

keeps repeating until it reaches the timeout.

Here is the solution:

1- Install openssh-server in the docker image and make sure you expose port 22, e.g. -p 2222:22.
2- The BSP and rootfs directory paths must be identical on the host and inside the docker container, i.e. -v /path/to/BSP:/path/to/BSP.
3- Run the container like this:

docker run -it --rm --privileged --network host \
  -v /dev/bus/usb:/dev/bus/usb/ -v /dev:/dev \
  -v /run/nvidia_initrd_flash/docker_host_network:/run/nvidia_initrd_flash/docker_host_network \
  -v /home/ben/Desktop/test/image_orin/Linux_for_Tegra:/home/ben/Desktop/test/image_orin/Linux_for_Tegra:slave \
  -p 2222:22 test_engine bash

4- Export the BSP paths inside the docker container in /etc/exports (see the sketch after this list).
5- Run exportfs -r.
6- sudo service rpcbind restart
7- sudo service nfs-kernel-server restart
8- Now you can manually flash the device.
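Putting steps 4-8 together, the commands look roughly like this (the paths are from my setup, and creating the host-side marker path is my own assumption based on the warning the flash script prints; adjust to your paths):

# on the host, before starting the container: the initrd flash tool warns you
# to create this path when using --network host (I assume a directory is fine;
# the -v mount in step 3 would create one anyway)
sudo mkdir -p /run/nvidia_initrd_flash/docker_host_network

# inside the container, as root: export the rootfs and images directories
# in the format the flash script suggests
cat >> /etc/exports << 'EOF'
/home/ben/Desktop/test/image_orin/Linux_for_Tegra/rootfs *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)
/home/ben/Desktop/test/image_orin/Linux_for_Tegra/tools/kernel_flash/images *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)
EOF
exportfs -r
service rpcbind restart
service nfs-kernel-server restart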
