How to back up a Jetson Nano image file

How to modify a Jetson Nano image file with pre-installed custom software!
I backed up the system files using the “dump” method. I restored it on a 128 GB TF card, and it failed to boot. Nothing happened, but it worked normally on a 64 GB TF card. Is there any way to solve this problem? Both the 128 GB TF card backup and the 64 GB TF card backup can be restored to a 64 GB TF card, and the system works normally after restoring the image.

My suggestion would not be to back up what you have wholesale, but rather to back up the difference between the stock image and what you have and apply that to the rootfs. If you back up a system after first boot, many things that you normally want to be unique, like the host ssh keys and hostname, will come along for the ride. This can lead to security and network issues which can easily be avoided by starting from a rootfs instead of a live image.

Instead, you can use your command history from your nano to make a script to apply the changes you’ve made, like installing your software. Then you can either run that on first boot or use a script like this to apply it to a rootfs, mastering an SD card that can be flashed safely onto an unlimited number of Nanos.

Basic instructions:

  1. Copy your ~/.bash_history file from your nano somewhere, and use it as a source to generate a new script to install your software (a minimal example script follows this list). You may wish to create a generic install script for your software and version control it with the rest of your software.

  2. Copy the script to the “rootfs” folder under the Linux_for_Tegra folder, wherever SDK Manager downloaded the bundle for your board (usually ~/nvidia/nvidia_sdk/board_name_here/Linux_for_Tegra/ on your host). There should be a folder for every board and every release you’ve downloaded with SDK Manager.

  3. If you are starting from a new rootfs (from a tarball, for example) instead of using the one provided by SDK Manager by default, you may need to run the “apply_binaries.sh” script within Linux_for_Tegra to install Nvidia’s packages (and kernel, and modules) on the rootfs.

  4. Next, download the script linked above and place “enter_chroot.sh” in your Linux_for_Tegra folder.

  5. Run “sudo ./enter_chroot.sh rootfs” from within your Linux_for_Tegra folder and you’ll find yourself inside a virtual nano chroot. This is like a virtual machine, but not exactly. All the modifications here will be applied before the first boot, which means the things that need to be unique, like the host ssh keys, will still be set up when you first start your nano. You may need to “sudo apt install qemu-user-static” first if the script warns you. You may also wish to configure kvm (steps 1-3 on that link). Your user must also be in the (usually) libvirt group. You can check this with the “groups” command, but it should be done automatically by step 3 on that page. If your system does not support kvm, qemu may still work but performance will suffer greatly. Qemu will complain if kvm is not set up correctly, so you will know for sure if it’s not. Edit: actually, I’m not sure acceleration works in this qemu mode, but it’s still fast enough to run apt operations.

  6. After step 5, you should find yourself in a root shell where you can run your script to install your software and make the rest of your required changes, like creating system users to run your software (if your script doesn’t do that). When you’re done, type “exit” to exit your virtual nano and clean up. Please report any issues you have with the script here.

  7. After you’ve made the changes you want to the rootfs, you can run Nvidia’s “create-jetson-nano-sd-card-image.sh” script with the proper parameters (e.g. create-jetson-nano-sd-card-image.sh -o flash_me.img -s 16G -r 200) to generate an SD card image you can flash with Etcher. The size (-s) parameter to create-jetson-nano-sd-card-image.sh only defines the minimum size of the SD card used. You can find out the minimum size you will need for an SD card by running “sudo du --si -sH rootfs” from within Linux_for_Tegra. I recommend rounding up to the nearest power-of-two SD card size (8G, 16G, 32G, etc.) and then compressing the resulting image with a zip utility. Etcher will unzip automatically so you can flash directly from the resulting file.

  8. Finally, flash the image and boot. The first boot scripts will be executed and the root filesystem expanded, just like a regular nano image. Host ssh keys will be generated. The GUI setup will be run so the hostname and so forth can be set uniquely.
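For step 1, this is roughly what such an install script might look like. It’s a minimal sketch only; the package names, paths, and user name are placeholders to be replaced with the commands from your own history:

#!/bin/bash
# install_my_software.sh -- example only; fill in with commands from ~/.bash_history
set -e

# install dependencies from apt (package names are placeholders)
apt-get update
apt-get install -y python3-pip git

# copy your application into place (paths are placeholders)
install -d /opt/my_app
cp -r /tmp/my_app/. /opt/my_app/

# create a system user to run it (uids below 1000 survive Nvidia's first boot scripts)
adduser --system --group my_app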

I am probably forgetting something, so please let me know if you run into any issues.

Hello, mdegans:
Thanks very much for your reply in the forum! It solved my big problem! The installation procedure successfully generated a usable image file. Some new questions need your help!

My system environment is Ubuntu 18.04.
The steps are:

  1. Download the driver and system in “sdkmanager”.
    Driver archive: Tegra210_Linux_R32.2.3_aarch64.tbz2
    System compression package: Tegra_Linux_Sample-Root-Filesystem_R32.2.3_aarch64.tbz2
  2. Decompress the driver archive, and then extract the system archive to the rootfs directory
  3. Executed the “apply_binaries.sh” script
  4. Used “enter_chroot.sh” to enter the nano system, used apt to download some software for testing, and then exited.
  5. The remaining steps were performed according to your reply, and the process was very smooth.
    There are a few questions I need you to look at now:
  6. I want to preset an account when making image files. There is a new problem: after entering the nano virtual environment with “enter_chroot.sh”, I created a user and then made the image. After booting a system made from this image, the account is gone and needs to be created again.
  7. After the first boot, there is a GUI for entering the password, but I cannot enter the desktop after entering the password. Is this caused by not installing the corresponding software? PS: after logging in to the system with a tty, the Tab key has no completion function, and pressing the “↑” key displays “^[[A”. There is very little pre-installed software.
  8. There is still a small problem. The program I compiled myself has a new dependency on the library file “libnvdla_compiler.so”. Is this due to the new system? How do I find this library?

In tty mode, when I enter the “bash” command, I can use the Tab and arrow keys normally. This system’s tty starts “sh” by default.

Can you post/PM a script or some sort of log of the exact steps you ran (especially within the chroot)? This is a reason I suggested a script might be useful.

I will try to replicate and fix. My feeling is that this all revolves around the user creation. Even those commands alone would be useful to replicate.

When you say “cannot enter the desktop”, do you mean the password is rejected or does it hang?

When you log in from a tty, I am assuming the password works, but you say you don’t get bash. What happens when you run /bin/bash from the tty?

What is shown under “Shell” when you run the “finger” command for the user you created?
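As an aside, if the Shell field turns out to be /bin/sh, you should be able to change the login shell with chsh (the username here is a placeholder):

chsh -s /bin/bash your_username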

First of all, thank you for your reply. I re-created the image file here, and the problem did not recur. It may be that I used incorrect commands after using “enter_chroot.sh”, which caused this situation.

I used sdkmanager to download all the nano files, but how do I install these in the image file? For example, cuda, tensorrt, cudnn. I thought I could use enter_chroot.sh and then dpkg -i …, but it didn’t feel right.

Possibly. What commands did you run within the chroot? Your problem seems to be related to user creation, which is why I asked about that part. Do you have a script you created at step 1, or the command you ran to create your user? I’m asking because Nvidia’s first boot scripts reset the default user, so if you create one before those scripts run, you must use certain parameters to avoid it (e.g. make a system user).

Edit: updated post after testing:

If you are using JetPack 4.3, you can use apt-get to install those packages from within the chroot. Many of their package names start with “nvidia”.

You will have to edit rootfs/etc/apt/sources.list.d/nvidia-l4t-apt-source.list to match the following for Nano:

deb https://repo.download.nvidia.com/jetson/common r32 main
deb https://repo.download.nvidia.com/jetson/t210 r32 main

To see what packages are available (edit: for a better solution, see this post), run “apt list nvidia*” within the chroot (after apt update), like this:

root@hostname:/# apt list nvidia*
Listing... Done
nvidia-cg-doc/bionic 3.1.0013-3 all
nvidia-container-csv-cuda/stable 10.0.326-1 arm64
nvidia-container-csv-cudnn/stable 7.6.3.28-1+cuda10.0 arm64
nvidia-container-csv-tensorrt/stable 6.0.1.10-1+cuda10.0 arm64
nvidia-container-csv-visionworks/stable 1.6.0.500n arm64
nvidia-container-runtime/stable 3.1.0-1 arm64
nvidia-container-toolkit/stable 1.0.1-1 arm64
nvidia-cuda-doc/bionic 9.1.85-3ubuntu1 all
nvidia-docker2/stable 2.2.0-1 all
nvidia-jetpack/stable 4.3-b134 arm64
nvidia-l4t-3d-core/stable,now 32.3.1-20191209225816 arm64 [installed]
nvidia-l4t-apt-source/stable,now 32.3.1-20191209225816 arm64 [installed]
nvidia-l4t-bootloader/stable,now 32.3.1-20191209225816 arm64 [installed]
nvidia-l4t-camera/stable,now 32.3.1-20191209225816 arm64 [installed]
nvidia-l4t-ccp-t210ref/stable,now 32.3.1-20191209225816 arm64 [installed]
nvidia-l4t-configs/stable,now 32.3.1-20191209225816 arm64 [installed]
nvidia-l4t-core/stable,now 32.3.1-20191209225816 arm64 [installed]
nvidia-l4t-cuda/stable,now 32.3.1-20191209225816 arm64 [installed]
nvidia-l4t-firmware/stable,now 32.3.1-20191209225816 arm64 [installed]
nvidia-l4t-graphics-demos/stable,now 32.3.1-20191209225816 arm64 [installed]
nvidia-l4t-gstreamer/stable,now 32.3.1-20191209225816 arm64 [installed]
nvidia-l4t-init/stable,now 32.3.1-20191209225816 arm64 [installed]
nvidia-l4t-initrd/stable,now 32.3.1-20191209225816 arm64 [installed]
nvidia-l4t-jetson-io/stable,now 32.3.1-20191209225816 arm64 [installed]
nvidia-l4t-jetson-multimedia-api/stable 32.3.1-20191209225816 arm64
nvidia-l4t-kernel/stable,now 4.9.140-tegra-32.3.1-20191209225816 arm64 [installed]
nvidia-l4t-kernel-dtbs/stable,now 4.9.140-tegra-32.3.1-20191209225816 arm64 [installed]
nvidia-l4t-kernel-headers/stable,now 4.9.140-tegra-32.3.1-20191209225816 arm64 [installed]
nvidia-l4t-multimedia/stable,now 32.3.1-20191209225816 arm64 [installed]
nvidia-l4t-multimedia-utils/stable,now 32.3.1-20191209225816 arm64 [installed]
nvidia-l4t-oem-config/stable,now 32.3.1-20191209225816 arm64 [installed]
nvidia-l4t-tools/stable,now 32.3.1-20191209225816 arm64 [installed]
nvidia-l4t-wayland/stable,now 32.3.1-20191209225816 arm64 [installed]
nvidia-l4t-weston/stable,now 32.3.1-20191209225816 arm64 [installed]
nvidia-l4t-x11/stable,now 32.3.1-20191209225816 arm64 [installed]
nvidia-l4t-xusb-firmware/stable,now 32.3.1-20191209225816 arm64 [installed]
nvidia-prime/bionic-updates 0.8.8.2 all
nvidia-settings/bionic-updates 390.77-0ubuntu0.18.04.1 arm64

I believe nvidia-l4t-jetson-multimedia-api is the one you are looking for, but on my rootfs this was already installed. The OTA system is new, so somebody from Nvidia should answer which packages provide the individual dependencies you need.
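One way to check this yourself, from within the chroot: dpkg -S searches only installed packages, while apt-file (an extra package you would have to install; this is a sketch, not an official Nvidia procedure) can search the repositories:

# which installed package owns the library?
dpkg -S libnvdla_compiler.so

# search packages that are not installed yet
apt update
apt install apt-file
apt-file update
apt-file search libnvdla_compiler.so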

I did some tests myself, and it seems some packages are not prefixed by “nvidia…”.

Deepstream 4.0 can be installed within the chroot after setting up the sources like:

apt install deepstream-4.0

TensorRT seems to be “tensorrt” so:

apt install tensorrt

You can use “apt list” with wildcards as shown in the above post to search for more (as well as “apt search nvidia”). There is probably a more elegant way to list all the packages in a repo, but I’m unsure of it offhand (edit: there is, see the post below). In any case, these commands should allow you to pre-install these packages on the rootfs so they are already there on first boot, avoiding having to use SDK Manager to flash the board or install those packages directly.

So I googled, and this is the way you can find a list of all the packages (from within the chroot):

root@hostname:/# grep ^Package /var/lib/apt/lists/repo.download.nvidia.com_jetson_t210_dists_r32_main_binary-arm64_Packages | awk '{print $2}' | sort -u
graphsurgeon-tf
jetson-gpio-common
libnvinfer6
libnvinfer-bin
libnvinfer-dev
libnvinfer-doc
libnvinfer-plugin6
libnvinfer-plugin-dev
libnvinfer-samples
libnvonnxparsers6
libnvonnxparsers-dev
libnvparsers6
libnvparsers-dev
nvidia-jetpack
nvidia-l4t-3d-core
nvidia-l4t-apt-source
nvidia-l4t-bootloader
nvidia-l4t-camera
nvidia-l4t-ccp-t210ref
nvidia-l4t-configs
nvidia-l4t-core
nvidia-l4t-cuda
nvidia-l4t-firmware
nvidia-l4t-graphics-demos
nvidia-l4t-gstreamer
nvidia-l4t-init
nvidia-l4t-initrd
nvidia-l4t-jetson-io
nvidia-l4t-jetson-multimedia-api
nvidia-l4t-kernel
nvidia-l4t-kernel-dtbs
nvidia-l4t-kernel-headers
nvidia-l4t-multimedia
nvidia-l4t-multimedia-utils
nvidia-l4t-oem-config
nvidia-l4t-tools
nvidia-l4t-wayland
nvidia-l4t-weston
nvidia-l4t-x11
nvidia-l4t-xusb-firmware
python3-jetson-gpio
python3-libnvinfer
python3-libnvinfer-dev
python-jetson-gpio
python-libnvinfer
python-libnvinfer-dev
tensorrt
uff-converter-tf
root@hostname:/# grep ^Package /var/lib/apt/lists/repo.download.nvidia.com_jetson_common_dists_r32_main_binary-arm64_Packages | awk '{print $2}' | sort -u
cuda-command-line-tools-10-0
cuda-compiler-10-0
cuda-core-10-0
cuda-cublas-10-0
cuda-cublas-dev-10-0
cuda-cudart-10-0
cuda-cudart-dev-10-0
cuda-cufft-10-0
cuda-cufft-dev-10-0
cuda-cuobjdump-10-0
cuda-cupti-10-0
cuda-curand-10-0
cuda-curand-dev-10-0
cuda-cusolver-10-0
cuda-cusolver-dev-10-0
cuda-cusparse-10-0
cuda-cusparse-dev-10-0
cuda-documentation-10-0
cuda-driver-dev-10-0
cuda-gdb-10-0
cuda-gdb-src-10-0
cuda-gpu-library-advisor-10-0
cuda-libraries-10-0
cuda-libraries-dev-10-0
cuda-license-10-0
cuda-memcheck-10-0
cuda-minimal-build-10-0
cuda-misc-headers-10-0
cuda-npp-10-0
cuda-npp-dev-10-0
cuda-nsight-compute-addon-l4t-10-0
cuda-nvcc-10-0
cuda-nvdisasm-10-0
cuda-nvgraph-10-0
cuda-nvgraph-dev-10-0
cuda-nvml-dev-10-0
cuda-nvprof-10-0
cuda-nvprune-10-0
cuda-nvrtc-10-0
cuda-nvrtc-dev-10-0
cuda-nvtx-10-0
cuda-samples-10-0
cuda-toolkit-10-0
cuda-tools-10-0
deepstream-4.0
libcublas10
libcublas-dev
libcudnn7
libcudnn7-dev
libcudnn7-doc
libnvidia-container0
libnvidia-container-tools
libopencv
libopencv-dev
libopencv-python
libopencv-samples
libvisionworks
libvisionworks-dev
libvisionworks-samples
libvisionworks-sfm
libvisionworks-sfm-dev
libvisionworks-tracking
libvisionworks-tracking-dev
nvidia-container-csv-cuda
nvidia-container-csv-cudnn
nvidia-container-csv-tensorrt
nvidia-container-csv-visionworks
nvidia-container-runtime
nvidia-container-toolkit
nvidia-docker2
opencv-licenses
vpi
vpi-dev
vpi-samples

You should be able to apt install pretty much anything on that list from within the chroot.
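For example, to pre-install the CUDA toolkit and cuDNN (package names taken from the listing above), from within the chroot:

apt update
apt install cuda-toolkit-10-0 libcudnn7 libcudnn7-dev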

So, I just booted the system after installing DeepStream, TensorRT, and a bunch of other stuff. First boot scripts executed as intended and I’m left with a system that requires no updates and already has all the stuff I want installed. Neat. If you run into any difficulties, don’t hesitate to ask.

Hi mdegans,
I followed your steps and got stuck at installing cuda, tensorrt, and cudnn.

I also edited rootfs/etc/apt/sources.list.d/nvidia-l4t-apt-source.list to:
deb https://repo.download.nvidia.com/jetson/common r32 main
deb https://repo.download.nvidia.com/jetson/t210 r32 main

After that I ran apt update, but it failed because my JetPack version is 4.2.

Example note: (NVIDIA Jetson NANO/TX1 - JetPack 4.2 [L4T 32.1.0]), (tensorflow-1.14)

So, what are the alternatives for JetPack 4.2 to install cuda-10.0, tensorrt-5.1, and cudnn?

apt update error:

Get:1 https://repo.download.nvidia.com/jetson/common r32 InRelease [2,541 B]
Get:2 https://repo.download.nvidia.com/jetson/t210 r32 InRelease [2,555 B]
Hit:3 http://ports.ubuntu.com/ubuntu-ports bionic InRelease   
Hit:4 http://ports.ubuntu.com/ubuntu-ports bionic-updates InRelease
Hit:5 http://ports.ubuntu.com/ubuntu-ports bionic-backports InRelease
Hit:6 http://ports.ubuntu.com/ubuntu-ports bionic-security InRelease
Err:1 https://repo.download.nvidia.com/jetson/common r32 InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 0D296FFB880FB004
Err:2 https://repo.download.nvidia.com/jetson/t210 r32 InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 0D296FFB880FB004
Reading package lists... Done
W: GPG error: https://repo.download.nvidia.com/jetson/common r32 InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 0D296FFB880FB004
E: The repository 'https://repo.download.nvidia.com/jetson/common r32 InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
W: GPG error: https://repo.download.nvidia.com/jetson/t210 r32 InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 0D296FFB880FB004
E: The repository 'https://repo.download.nvidia.com/jetson/t210 r32 InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

Hi, thohtdelta,

Nvidia’s online repos probably only work with 4.3 and up. Nvidia’s key is not installed on the 4.2 rootfs, and even if it were, it would probably break things.

Hello, mdegans,
Thank you for your support and detailed answers. The previous problem did occur because the account was created after enter_chroot.sh and then the image was made.
Due to my network limitations, I cannot use the sources you provided as a download source, so I am considering other ways. One more thing: going back to the original question, this is not my original intention for making images; the result differs from my expectations. I don’t know which scripts are executed when the system is first turned on. I want to be able to skip the scripts used to create accounts, or to preset accounts, so that after burning the image the Jetson nano can be used directly. What should I do? Could you give me some advice? Also, most of the time I cannot use Google.

Creating a user will probably work if you use a uid and gid other than 1000. I recommend choosing a uid < 1000 for a daemon or app you need to run (adduser --system), and if you need an interactive user to log in, start at 1100 or something (adduser --gid 1100 --uid 1100). The Nvidia first boot should only reset uid/gid 1000. More details:

http://manpages.ubuntu.com/manpages/xenial/man8/adduser.8.html
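For example (the names and ids below are placeholders):

# a system user for a daemon; adduser allocates a uid below 1000, which the first boot scripts leave alone
adduser --system --group my_daemon

# an interactive user well clear of uid/gid 1000 (the group must exist before adduser can use it)
addgroup --gid 1100 my_user
adduser --uid 1100 --gid 1100 my_user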

I am not sure what you mean by “network limitation”. It should work fine. If you’re not allowed to connect the device to the internet for security reasons, you probably must use dpkg to install, or a local apt mirror.
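If you go the dpkg route, a rough sketch (the URL path and package name are placeholders; browse the repo to find the real ones):

# on a machine with internet access, fetch the .deb files you need
wget https://repo.download.nvidia.com/jetson/t210/pool/main/.../some-package_arm64.deb

# copy them into the rootfs and install from within the chroot
sudo cp some-package_arm64.deb rootfs/tmp/
sudo ./enter_chroot.sh rootfs
dpkg -i /tmp/some-package_arm64.deb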

If you want to skip certain first boot things, most of Nvidia’s first boot scripts are in /etc/systemd, and the unit files (.service) that execute them live in /etc/systemd/system.

To my recollection, they all start with nv. You may wish to review them and disable them as appropriate to your use case.
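A sketch of how you might review and disable them from within the chroot (the service name below is a placeholder; list what actually exists on your rootfs first):

# list Nvidia's first boot units
ls /etc/systemd/system/nv*

# disable one you don't want (example name only)
systemctl disable nv-example.service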

Hi mdegans,
Thanks for your help, but I haven’t managed it yet. Can you make a tutorial video for everyone?

Sure. I think I can manage that after I am done with the next version of the project I am working on. Anything specific you want me to cover? Just how to package your software and master a custom image with your software preinstalled?

Thanks very much, mdegans. I want to back up my Jetson nano image file to another SD card. I am looking forward to your video tutorial. I’m not very good at English, so I hope you can help!