How to clone Jetson Nano SD Card?

@shankarjadhav232

To enable the NVIDIA apt sources, edit /etc/apt/sources.list.d/nvidia-l4t-apt-source.list so it looks like the following (the SOC is t210 on the Nano; on the NX it would be t194 instead, but the same procedure applies).

deb https://repo.download.nvidia.com/jetson/common r32.4 main
deb https://repo.download.nvidia.com/jetson/t210 r32.4 main

The key should already be installed. It’s only necessary to install it if you are using an alternative Ubuntu aarch64 rootfs like Ubuntu Base or one created with a bootstrap tool.

Once you do that, you can just apt update && apt install deepstream-5.0 to install deepstream (or whatever).

@mdegans
I see my mistake now. I was intentionally using an older version thinking it was the one included in Jetpack 4.3, but there are 2 versions of L4T identified as being part of Jetpack 4.4. I need to go back one more version. Thanks again.

Edit: Interestingly, there is no CUDA directory under /usr/local/ even though I ran the apply_binaries script. I’ll be curious to see if it exists when I create the new image.

“apply_binaries.sh” does not add optional packages. The image flashed will always be a “basic” image. What this script does is add the NVIDIA drivers to an otherwise pure Ubuntu image.

The CUDA content in “/usr/local/” has to be installed after first boot setup is complete: use the network to log in to the account you just created, scp the packages to the Jetson, and run the dpkg install commands.

Note: If you get a build set up the way you want, then you could clone it, and then installing from the clone would already have all of that in place.

@linuxdev
Thanks for the info. As for cloning, you have come full circle. Check the title of the thread, lol. Directly cloning a Nano that has already been set up is the idea of this discussion. How? Through much discussion, it seemed like a bad idea, and instead we should manipulate the rootfs dir before setup to include the packages we want.

Is there any comprehensive list of things that are included in the SD Card image from the Jetson Download Center that will not exist in rootfs after the apply binaries script has been run?

This is just some general talk about the topic, and is definitely not a comprehensive list, but I must warn you this is long (you asked for it :P, but a lot of people ask about some part of this anyway). YMMV. If you search for “dd if=”, then you’ll get to the actual parts about using dd to clone an SD card. There is a summary of the dd command at the end if you’re impatient.

At power up, every computer has a need to set up power rails, and to enable clocks in some specified sequence. On a PC much of this is done by the BIOS/UEFI/CMOS. For a PC the completion of the BIOS stage presents a uniform interface for operating systems expecting install or boot, and thus the CMOS and other pre-boot content does not need to be customized by every operating system for every model of motherboard. Embedded systems do not have a BIOS/UEFI/CMOS, and thus this is all done in software separate from the operating system.

If you wanted to perform a backup of a PC, then you wouldn’t bother with backing up the BIOS/UEFI/CMOS. When backing up a Jetson, you do not bother backing up the partitions outside of the rootfs, as these are somewhat equivalent to BIOS/UEFI/CMOS. Granted, some of those partitions contain a bootloader binary which is somewhat equivalent to GRUB, but unless something there is customized and in need of special attention, then any standard flash would restore these anyway. Most installs to a Jetson are just a single rootfs partition, with label “APP”. This is your real content and what you would back up (and there are additions if your hardware is custom, but you’d still always start with the rootfs).

If you flash and say to “reuse the rootfs”, then all of that side content gets flashed again, but this is standard (unless you have modified hardware). There is no reason to avoid flashing this other surrounding “non-rootfs” content.

You could clone this extra non-rootfs content, but this can have some problems you won’t expect. In earlier days this would not have been an issue, but once secure boot started development, the content of these other partitions had to be signed. What happens if you install signed content to hardware with a different key? It will fail to boot. Those partitions are the same, except their signing may differ. Unless the Jetson fuses have been burned to prevent reading the hardware, then the flash will “do the right thing” relative to what key is being used (once fuses have been burned you have to know what key to pass, or the hardware is useless).

If you do have custom hardware, and thus you have modifications to some other partition outside of the rootfs, then you’d place this on the host PC prior to flash, and let flash do the signing. A direct clone of one of these partitions would only work if you (A) removed the signing, and then (B) put the unsigned equivalent onto the host PC before flashing. Why bother doing this if the content (other than signature) is standard and unmodified?

One issue which people can sometimes fail to account for is that the non-rootfs content and the rootfs have some strong dependencies. This is why you cannot mix something like an R28.2 rootfs with an R32.4 flash tool. Cloning won’t care since it only reads content, but once flashed, the rootfs and non-rootfs have to be compatible. Still, a clone of R28.2 rootfs can usually be ported to R32.4, so the clone is useful.

There were some minor patch releases which did not generate a new JetPack/SDKM. Those are compatible, but are not nearly so common.

“Clone the rootfs, and then flash everything” is how I’d normally say to do it. For a Nano which is a dev kit SD card model, I will add that you can clone the whole card if the content is going back onto the same Nano. There will also be a big exception for a dev kit Nano or NX SD card model, since they were designed from the ground up to run entirely from the SD card, and no eMMC is present for some of the secure boot. Notice how there is a generic image available for the SD card model of Nano and NX? There can’t be a signing requirement or this would not exist. I do not know all of the differences between SD card and eMMC models, but life should be simpler for this.

Does your PC have a lot of spare disk space? For example, if you have a 128GB SD card, then a clone will consume 128GB. If your host PC has an extra TB of disk, then that won’t matter (aside from massive files requiring a massive amount of time to manipulate). You can try the following with two SD cards which are the same size, and it should “just work” for a Nano (and if not you can post results/errors). I will assume that the PC names the SD card “/dev/mmcblk1”, but be absolutely certain you name the correct device to avoid cloning or overwriting the wrong partition/drive (note that the “block size” or “bs” used, is just a buffer and for increased performance…regardless of size chosen the result will be the same, but time required will change…dd works on damaged filesystems, and for that case the block size chosen would match the block size of the device, but you don’t need to worry about that):

  1. Create a clone on the PC:
    sudo dd if=/dev/mmcblk1 of=sd_clone.bin bs=1M
  2. Remove that SD card, plug another card in, presumably with ID “/dev/mmcblk1” since the old card is now removed.
  3. Send the clone to the new SD:
    sudo dd if=sd_clone.bin of=/dev/mmcblk1 bs=1M
  4. Try it out for boot.
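Before trusting the result, it is worth confirming the image matches its source byte-for-byte. This is a sketch, not from the original post; the demo runs on plain files so it needs no root, but `cmp` works identically on a block device (e.g., `sudo cmp /dev/mmcblk1 sd_clone.bin`):

```shell
# Sketch: verify that a dd image matches its source byte-for-byte.
# cmp exits non-zero at the first differing byte.
verify_clone() {
  cmp -- "$1" "$2" && echo "clone verified"
}

# Demo on plain files; on real hardware you would run, as root:
#   verify_clone /dev/mmcblk1 sd_clone.bin
dd if=/dev/urandom of=src.bin bs=1024 count=4 2>/dev/null
dd if=src.bin of=clone.bin bs=1M 2>/dev/null
verify_clone src.bin clone.bin   # prints: clone verified
```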

Alternately, you could just clone the rootfs, and then use flash software when performing an actual flash. The label for that partition is “APP”. While the SD is plugged in to the PC, this will list most everything about your SD card:

sudo gdisk -l /dev/mmcblk1
lsblk -f /dev/mmcblk1

In some cases you will care about the partition’s UUID, and this makes it useful to save a copy of the lsblk -f output for later reference. In the gdisk -l output the label “APP” will show up, and if this refers to “/dev/mmcblk1p1” (the first partition of “mmcblk1”, corresponding to the “Number” column of gdisk -l), then this means you’d dd copy just “mmcblk1p1”, and not the whole “mmcblk1”. Without the “p1” this is the entire SD card. With the “p1” it is a clone of a single partition. Example:
sudo dd if=/dev/mmcblk1p1 of=app_partition_clone.bin bs=1M

If you’ve ever flashed (versus using a preexisting image without any true flash) you will have this directory on your host PC:
~/nvidia/nvidia_sdk/JetPack...version.../Linux_for_Tegra/

Normally this would be used to flash a connected SD card model Nano on command line (I think there are a few different targets which can be used, but I have not experimented so I’ll stick to jetson-nano-qspi-sd):
sudo ./flash.sh jetson-nano-qspi-sd mmcblk0p1

Normally the “Linux_for_Tegra/rootfs/” contains a full Ubuntu operating system plus some NVIDIA drivers, and an image is created based on this prior to flash. The actual argument of “jetson-nano-qspi-sd” triggers some copies into the “rootfs/boot/” area, e.g., might edit the extlinux.conf file, but otherwise the image generated is an exact match to the “rootfs/”.

If you had modified the “rootfs/”, for example by adding content to “rootfs/usr/local/”, then the flash would also contain that content. If “rootfs/etc/” had a user’s name and pass added, and perhaps “rootfs/home/” had a user’s home directory, then this too would be 100% preserved every time an image is generated and flashed.

If you have a clone, then you don’t need to generate this image. The generated image normally goes to “Linux_for_Tegra/bootloader/system.img”. If you have an image there already (regardless of whether it is from a clone or from a previous regular flash), then the “-r” option to flash.sh will avoid overwriting what is there.

If you were to place a copy of your clone in “Linux_for_Tegra/bootloader/system.img”, and then flash like this, then your clone would be installed (a clone of the rootfs, not the entire SD card):
sudo ./flash.sh -r jetson-nano-qspi-sd mmcblk0p1
(the only difference is the addition of “-r”)

Note that there might still be one wrinkle in this: The commands have all assumed the image is some default size. You might need to specify the size of the APP/rootfs partition with “-S size”.

Regarding size, this is the size of the fully expanded raw APP/rootfs partition. In a normal flash, and in a normal flash.sh generated clone (this does not apply to using dd and only applies to cloning from flash.sh software), you will actually generate two files: One is a “raw” file, and is the exact byte size of the partition. The other is a “sparse” file, and so long as the filesystem is not filled, this will be a lot smaller than the “raw” image. A normal flash puts the raw image at “Linux_for_Tegra/bootloader/system.img.raw”, and the sparse image at “Linux_for_Tegra/bootloader/system.img”. Either results in the same end flash, but the sparse image flashes faster.

If you look at the actual byte size of the raw clone, then this will be divisible by 1024 either twice (MiB), or three times (GiB). For example, if dd of APP/rootfs is size “30064771072” bytes, then this can be divided evenly by 1024 three times and is “28”. This is “-S 28GiB”. This would be the flash command to use this image when also specifying size:
sudo ./flash.sh -r -S 28GiB jetson-nano-qspi-sd mmcblk0p1
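The GiB arithmetic can be checked in the shell. This is a sketch; the byte count is the example value from the text, and on a real clone you would obtain it with something like `stat -c %s system.img.raw`:

```shell
# Sketch: turn a raw image's byte size into the "-S" argument.
# 30064771072 is the example byte count from the text above.
bytes=30064771072
gib=$((bytes / 1024 / 1024 / 1024))
# Confirm it divides evenly three times; otherwise GiB is the wrong unit:
if [ $((gib * 1024 * 1024 * 1024)) -eq "$bytes" ]; then
  echo "-S ${gib}GiB"   # prints: -S 28GiB
else
  echo "not a whole number of GiB; try MiB instead"
fi
```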

The dd image and the raw image can be loopback covered and lots of nice tricks used, e.g., you can mount the loopback device as if it is a real hard drive partition, and then use QEMU to run various dpkg operations on it. Even without QEMU you can copy files to/from the loopback mounted raw clone (dd copies are raw). You cannot do this with a sparse image…a sparse image is only good for flashing.
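As a sketch of the loopback idea (the image name and mount point are assumptions, and root is required):

```shell
# Sketch: treat a raw clone like a real partition via loopback (needs root).
sudo mount -o loop clone_of_rootfs.bin /mnt   # mount the raw image
ls /mnt/etc                                   # browse or copy files freely
sudo umount /mnt                              # always unmount before flashing
```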

Summary to clone APP/rootfs with dd from a host PC, when the SD card is “/dev/mmcblk1” and the APP label is the first partition, “/dev/mmcblk1p1”: sudo dd if=/dev/mmcblk1p1 of=clone_of_rootfs.bin bs=1M (“bs” is optional)

Cloning is unsafe for anything but backup and restore to the same exact device. There is no good way to do it. There will be artifacts from development and that will cause serious security and reliability issues.

If you don’t care about security, like if you’re just making images for home use, go ahead. However, the “right way” is the way outlined in the L4T documentation: master an image using NVIDIA’s scripts. It can be done in a VM and can be automated. The right way is not always the easiest way, but it avoids technical debt.

It installs the Debian packages in Linux_for_Tegra/nv_tegra/... by calling nv-apply-debs.sh, so it installs more than just the kernel and modules. The optional packages can be installed from the online apt repos from within a chroot.

I realize this, but those are not “optional”. When I say optional, I mean that when running SDKM and you get to the list of optional goodies, e.g., CUDA, sample programs, and so on, nothing from that list is installed to the image prior to flash. The nv_tegra content is not optional despite being shipped in package format.


@linuxdev
Thanks for all the information, that was an interesting read! I am trying to create an image that I can use to flash multiple Nanos, so I think I will stick to the chroot into rootfs method for now.

@mdegans
If I install CUDA and TensorRT while chrooted into the rootfs, will it work properly on the Nano? I was worried it might do some hardware examination and bake that into the install. I’m also worried that I am missing other optional components installed by the SDK Manager. Maybe I could flash the Nano with the pre-built Nano image and then copy the rootfs from there, thus including those components?

Edit: Alternatively, can I use the SDK Manager to install these SDK components in force recovery mode after I flash the Nano with my custom image and then copy that rootfs?

Edit2: It looks like I can just use apt to install the nvidia-jetpack package and that should give me what I want. I will try this out.

Edit3: apt isn’t finding any packages for jetpack. Guess I’ll try grabbing rootfs of Nano after using SDK Manager to install.

Edit4: So I added to /etc/apt/sources.list.d/nvidia-l4t-apt-source.list

deb https://repo.download.nvidia.com/jetson/common r32.3 main
deb https://repo.download.nvidia.com/jetson/t210 r32.3 main

to try to add the repositories with the jetpack packages, but I get

The repository ‘https://repo.download.nvidia.com/jetson/common r32.3 Release’ does not have a Release file.

and same for the other repo.

Those extra components can be installed after you have a complete custom install. However, the Jetson should be fully booted and not in recovery mode. Those “extras” are not part of flash, they are installed over ssh.

Don’t know about the other issues.

The point is that I am trying to install them before first boot. I want them to be in the image so I can flash a bunch of Jetsons and have them ready to go without repeating all the same setup on each Nano.


The previous idea of using a clone to flash from would work. @mdegans gave steps to add the right repositories so these can be added on a running system via apt-get. Then clone.

If you want to add these to the sample rootfs prior to flash, then apply_binaries.sh will not add the optional content for you. You could perhaps use QEMU to run apt commands on the sample rootfs as if it were a running Jetson. @mdegans happens to be good at this! :)
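A rough sketch of that QEMU approach (not from the posts; the package name is an example, and the paths, the need for qemu-user-static with binfmt support on the host, and running everything as root are all assumptions):

```shell
# Sketch: run apt inside the aarch64 sample rootfs from an x86 host.
# Requires the qemu-user-static package (with binfmt_misc registered).
sudo cp /usr/bin/qemu-aarch64-static Linux_for_Tegra/rootfs/usr/bin/
sudo cp /etc/resolv.conf Linux_for_Tegra/rootfs/etc/   # DNS inside the chroot
sudo chroot Linux_for_Tegra/rootfs apt update
sudo chroot Linux_for_Tegra/rootfs apt install -y nvidia-jetpack
sudo rm Linux_for_Tegra/rootfs/usr/bin/qemu-aarch64-static  # clean up before flash
```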

There is a script here you could examine, it is for adding an account before flashing so that the first boot content is already in place. You could study this or edit this (it is a script) to work with packages if you know what the packages are:
https://forums.developer.nvidia.com/t/jetson-nano-all-usb-ports-suddenly-stopped-working/75784/37

From @mdegans, very useful to study.

apply_binaries.sh will never add this content. That script is for content required to boot and run, and does not work with anything optional.


@loophole64

apply_binaries.sh should install those. If your apps are not finding them, it might be because nvcc is not in the PATH by default, or the tensorrt version might not be what they expect. apply_binaries.sh uses chroot internally to do the same thing as the script. Any “optional components” can be installed via apt-get. I would recommend against using SDK Manager at all.

Re: apt lists, try this:

deb https://repo.download.nvidia.com/jetson/common r32.4 main
deb https://repo.download.nvidia.com/jetson/t210 r32.4 main

(for JetPack 4.4)

Oh, it absolutely does, but the danger is you can end up with artifacts, things like identical ssh host keys (normally generated on first boot). That leads to all machines having the same ssh host keys, which leads to insecure ssh configurations, which can lead to things like supply chain attacks. I have seen people disable ssh warnings for this reason, leaving the development environment vulnerable. Ssh host keys are just one of a host of things that could cause problems.

There is no safe way to clone an SD card once it’s booted. You would need something like virt-sysprep to try to remove such dangers from the rootfs. Besides this, a bit-for-bit clone of an SD card might include deleted content as well, even from before your last flash. That could be pictures of your kids or sensitive credentials.
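For example, something along these lines (a sketch; virt-sysprep comes from the libguestfs-tools package, and the image name is an assumption):

```shell
# Sketch: scrub machine-specific state (ssh host keys, machine-id, logs,
# shell histories, ...) from a raw clone before distributing it.
sudo virt-sysprep -a sd_clone.bin

# To see what the default operations would remove:
virt-sysprep --list-operations
```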

It’s absolutely fine if you want to clone for your own home development and you regenerate ssh keys manually and change the hostnames, etc… but if you publish a cloned, raw, sd card image, it could lead to all sorts of issues. There are ways to make it more safe, but never as safe as mastering from a rootfs aided by Nvidia’s scripts.

apply_binaries.sh should install those. If your apps are not finding them, it might be because nvcc is not in the PATH by default, or the tensorrt version might not be what they expect. apply_binaries.sh uses chroot internally to do the same thing as the script. Any “optional components” can be installed via apt-get. I would recommend against using SDK Manager at all.

I understand what you are saying, but the CUDA folder does not exist.

I see that the version of L4T I downloaded before, the mistaken one, is a developer preview. I am starting to think that is why the tools I expected weren’t there. I am going to rebuild the image with the correct version and see if that installs the expected tools.

Thanks again for the help.

Edit: By the way, in order to satisfy my curiosity as a Linux newb:

deb https://repo.download.nvidia.com/jetson/common r32.4 main
deb https://repo.download.nvidia.com/jetson/t210 r32.4 main

My understanding is that this adds these repositories so that apt or apt-get has their packages available once you run apt update. I probably won’t need to use this, but is the reason I couldn’t use r32.3 that the repo owner hasn’t organized the older packages with a Release file, or that the packages don’t exist in the repo at all? Is there an alternative way that I could use these repos to install the older versions of the NVIDIA files if I needed to sometime in the future?

I guess I cheat…I have a skeleton of directories and configurations (including host keys and everything user account related, such as being set up to allow my PC’s ssh keys without ever touching it again) for each of my Jetsons, and I recursively copy this into the “rootfs/” before a flash.

If you do this enough, then I suggest building up a “skeleton” of directories and files (properly preserving permissions and numeric IDs) custom to a particular Jetson. It saves a lot of time. Even if you don’t copy those directly into place, it works as a guide map when looking at things like what to consider for a mass flash. My router also happens to assign a specific address via MAC, so basically I just flash and run and it is all there, “just working”.

A similar approach could be taken where a script is used to make edits to whatever overlay you recursively copy in.
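A minimal sketch of that overlay copy (directory names are assumptions; the demo uses scratch directories, while on a real host you would rsync into Linux_for_Tegra/rootfs/ with sudo):

```shell
# Sketch: copy a per-board skeleton into the rootfs, preserving
# permissions and numeric UID/GID (-a archive mode, --numeric-ids).
# Demo with scratch directories standing in for the real paths:
mkdir -p skeleton/etc rootfs/etc
echo "my-nano" > skeleton/etc/hostname
rsync -a --numeric-ids skeleton/ rootfs/
cat rootfs/etc/hostname   # prints: my-nano
```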

What do you see from “apt search cuda”? Is it empty, or do packages show up?

You are correct that adding repositories makes that repository available to apt once the sudo apt update is completed. Note however that there are several “branches” of some repositories. There is a “main”, a “restricted”, a “universe”, a “bionic-updates”, a “partner”, and a “multiverse”. On a given server apt will search only for the branches in your repository list. Note that Ubuntu 18.04 has code name “bionic”, so “bionic” is also a branch…the 18.04 content.

Repositories are often in separate files in “/etc/apt/sources.list.d/” so as to not pollute the main sources.list file.

I’m not sure which branches the NVIDIA repository has, but all I see by default is “main”.
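For comparison, a stock Ubuntu Ports entry (as found on an 18.04 aarch64 system) lists several branches on one line; this is an illustrative fragment, not something to paste blindly:

```
deb http://ports.ubuntu.com/ubuntu-ports bionic main restricted universe multiverse
deb http://ports.ubuntu.com/ubuntu-ports bionic-updates main restricted universe multiverse
```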

The repos for JetPack 4.3 are just “r32” rather than “r32.3”, which would make sense in context. I assume it’s that way because when the repos were initially set up for JetPack 4.3 the decision was to do things a bit differently. If you do install from “r32” you will need to use a JetPack 4.3 BSP and rootfs.

I double-checked, and it looks like apply_binaries.sh should install CUDA according to its log:

...
Setting up nvidia-l4t-cuda (32.4.3-20200625213407) ...
...

If somehow that fails, you can try installing that package from within the chroot. I will look into this and get back to you. It relates to what I am working on right now anyway.

I install public keys as well for a user I might add, but my host keys are unique per board. The host key is generated on first boot. Otherwise, if there is more than one machine with the same host key, ssh complains, and while you can dismiss or disable that warning, it’s easy to get into the habit of doing so, and that can be risky.
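A sketch of regenerating unique host keys, as first boot would (the demo writes under a scratch prefix so it can run unprivileged; on an actual board you would operate on /etc/ssh as root):

```shell
# On the Jetson itself (as root) you would remove the old keys and let
# ssh-keygen recreate every default host key type:
#   rm /etc/ssh/ssh_host_*
#   ssh-keygen -A
# Demo against a scratch prefix; with -A, the -f directory is prepended
# to the default /etc/ssh/ key paths, so no root is needed:
mkdir -p demo/etc/ssh
ssh-keygen -A -f "$PWD/demo"
ls demo/etc/ssh
```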

For example, if somebody were to compromise one of my boards on my IOT VLAN and somehow managed to force my switch into hub mode (eg, through arp flooding), or otherwise manage a MITM, they could imitate a board and capture a private key. Having a unique ECDSA key per board precludes that possibility.

And yeah, I take precautions so that getting to the point where I have to worry about a MITM is less likely, but I’ve seen environments where development boards’ ssh ports are exposed to the internet with purely password authentication and really weak passwords (like “password123” bad). I prefer to assume everything is compromised and design things defensively with that in mind.

The skeleton idea is a great one. There’s dotfiles in /etc/skel you can customize as well, for example, to have cuda in path by default for any new users. There are some nice dotfile collections out there you can use as a base instead of the default Ubuntu one. My usual .bashrc prompt is like this:

[username@hostname] -- [/some/working/directory] 
 $ 

And if the user is in the sudo group or is root, the text is red to remind me.

Mine too. But instead of generating them I save the keys for that board. That way if I flash it ends up with the same keys and MAC address combination. ssh just works without complaint when I do that. Makes it easy to jump from one release to another and have it “just work” without any extra network setup. However, I have only one of each type of Jetson. I am guessing you have more than one of a given type (e.g., I have only 1 TX2, 1 Nano, 1 NX, 1 Xavier…I lied, I have 2 TK1s, but each has its own unique “overlay” I can add if I were to flash these antiques). No two boards have the same keys.

None of my Jetsons have special permissions for reaching my host. Only the host has the ability to connect to the Jetson. The host has its own separate network for Jetsons, so security is much easier to deal with. That is all by ssh key, not password, running on a private subnet of a separate ethernet card. Keys can do so much to make a developer’s life simpler, especially combined with a separate ethernet card on an isolated wired router which enforces MAC addresses.

A suggestion to other developers: Don’t give your Jetson any special access to anything. Do give your host PC access via keys instead of passwords.
