How to install JetPack 4.4 on my host PC (Ubuntu 18.04)

Hi,
We will develop a carrier board with the Jetson Xavier NX, and we will change the kernel source for some peripheral devices with the JetPack SDK.
1. On the host PC, I downloaded the host components using sdkmanager.
file list:
cuda-repo-cross-aarch64-10-2-local-10.2.89_1.0-1_all.deb
cuda-repo-l4t-10-2-local-10.2.89_1.0-1_arm64.deb
cuda-repo-ubuntu1804-10-2-local-10.2.89-440.40_1.0-1_amd64.deb
Tegra_Linux_Sample-Root-Filesystem_R32.4.3_aarch64.tbz2
Jetson_Linux_R32.4.3_aarch64.tbz2

Now, I can build the system image with Tegra_Linux_Sample-Root-Filesystem_R32.4.3_aarch64.tbz2 and Jetson_Linux_R32.4.3_aarch64.tbz2. But I don’t know how to build the other JetPack components (TensorRT, cuDNN, CUDA, Computer Vision, …) into the system image on my host PC.

Those packages/files are not normally installed into the image which is flashed. Normally that happens only on a running system, after the flash and a full reboot, once first boot setup has been completed. What the SDK Manager would normally do is fully flash the Jetson; the Jetson would reboot; the end user would complete first boot setup for time zone and a login user name; and then SDKM would log in over ssh as that user. Once logged in, “sudo apt-get install ...packages...” would occur. The image on the host PC never has those packages installed.

Your host PC, after a flash has been performed, will have this directory:
“~/nvidia/nvidia_sdk/JetPack_…version…/Linux_for_Tegra/”

Within this is a shell script, “apply_binaries.sh”. You might be interested in studying this to see how it uses QEMU to add certain packages.

Alternatively, check this out from @mdegans:
https://forums.developer.nvidia.com/t/tx2i-modify-rootfs-to-create-new-user-during-flashing/118076/5

This script is more or less a derivative of all of the above; it is specialized for creating a first boot user directly in the rootfs, and could be modified to add packages as well:
https://forums.developer.nvidia.com/t/jetson-nano-all-usb-ports-suddenly-stopped-working/75784/37

An alternative approach would be to set up a Jetson the way you like it, and then clone the system. You could then flash with that image instead of generating a new image each time. If you do this, then you would probably want to first delete anything set up for that specific Jetson, e.g., something depending on the ethernet MAC address would not be valid on the next Jetson, and perhaps you want to alter passwords.

Regarding clones: A clone generates both a “raw” image and a “sparse” image. Only the raw image can be observed, edited, and examined. The sparse image can be used for flash, but you will have no ability to do anything else with it. The raw image is much larger, and thus takes longer to copy as a file or flash. However, if you have an edited raw image, then the “mksparse” tool can be used to create a smaller sparse image from this (you’d use “NULL” or “0” for the fill pattern). If you clone, then I highly recommend throwing away the sparse file and keeping only the raw file. When you are done you can run “bzip2 -9” on the image, which will take a very long time to compress, but then it is much smaller for storage.
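As a rough sketch, assuming you work from “Linux_for_Tegra/” and the clone is named “my_backup.img.raw” (the names here are just placeholders):
# Create a smaller sparse image from an edited raw clone, using a fill pattern of 0:
sudo ./bootloader/mksparse --fillpattern=0 my_backup.img.raw bootloader/system.img
# Compress the raw clone for storage (slow, but the result is much smaller):
bzip2 -9 my_backup.img.raw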

Instructions for cloning differ depending on hardware and release. If you are using an SD card version of the NX, then you don’t even need to use the actual Jetson to clone…you can just use dd from the host PC on the SD card (see the sketch after the eMMC example below). If you have an eMMC version of the NX, and a recent L4T release, then from “Linux_for_Tegra/” the following would be a typical clone:
sudo ./flash.sh -r -k APP -G my_backup.img ...jetson model...
…where I think “jetson model” for eMMC NX would be “jetson-xavier-nx-devkit-emmc”. This would produce both sparse “my_backup.img” and raw “my_backup.img.raw”. Both are very large files. I would recommend deleting “my_backup.img” immediately, and making sure you have a safe unaltered copy of “my_backup.img.raw” somewhere. Just to emphasize again, these are very large files, and the raw image is most of the size of the entire eMMC for eMMC models.
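For the SD card version mentioned above, a minimal sketch of cloning from the host PC might look like this (assuming the card shows up as “/dev/sdX”; substitute the real device name and be very careful to pick the right one):
# Clone the entire SD card to a file on the host PC:
sudo dd if=/dev/sdX of=my_sd_backup.img bs=1M status=progress
# Restoring later just reverses if= and of=:
sudo dd if=my_sd_backup.img of=/dev/sdX bs=1M status=progress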

Note that in a normal flash the “driver package” (the Jetson_Linux_R32.4.3_aarch64.tbz2) is unpacked (do not use sudo to unpack it if you do this manually) and creates the content of “Linux_for_Tegra/”. The “Tegra_Linux_Sample-Root-Filesystem_R32.4.3_aarch64.tbz2” is an unmodified pure Ubuntu which is unpacked into “Linux_for_Tegra/rootfs/” (and this must be unpacked with sudo). When “sudo ./apply_binaries.sh” is run (and SDKM/JetPack does all of this for you if you’ve ever flashed once with SDKM), basic drivers and libraries required for a Jetson to run normally with hardware acceleration are installed on top of the “rootfs”.
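If you ever do this manually instead of via SDKM, the sequence looks roughly like this (a sketch using the R32.4.3 file names above):
# Unpack the driver package without sudo; this creates "Linux_for_Tegra/":
tar xjf Jetson_Linux_R32.4.3_aarch64.tbz2
# Unpack the sample rootfs into "Linux_for_Tegra/rootfs/" (sudo is required here):
cd Linux_for_Tegra/rootfs
sudo tar xjpf ../../Tegra_Linux_Sample-Root-Filesystem_R32.4.3_aarch64.tbz2
# Add the NVIDIA drivers/libraries on top of the rootfs:
cd ..
sudo ./apply_binaries.sh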

In older releases apply_binaries.sh just used sudo to unpack files. In newer releases QEMU is used, and the content is added as actual “.deb” packages while making the host PC pretend it is arm64/aarch64. You could extend this QEMU method to install other “.deb” files, e.g., the “cuda-repo-l4t-...” package would go in first, and then other packages could in theory be added from NVIDIA’s repository via QEMU (I have not done this though, so I am uncertain as to what issues you might run into, e.g., perhaps QEMU would take a lot of effort to set up ethernet, and apt-get won’t work without ethernet).
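I have not tried it, but a sketch of extending that QEMU method might look like the following, run from “Linux_for_Tegra/” (the repo .deb is from your file list; “cuda-toolkit-10-2” is an assumed package name, and networking/repo-key setup inside the chroot may need extra work):
# QEMU user-mode emulation (with binfmt support) must be installed on the host PC:
sudo apt-get install qemu-user-static
sudo cp /usr/bin/qemu-aarch64-static rootfs/usr/bin/
# Stage the arm64 local repo package inside the rootfs and install it via chroot:
sudo cp cuda-repo-l4t-10-2-local-10.2.89_1.0-1_arm64.deb rootfs/tmp/
sudo chroot rootfs dpkg -i /tmp/cuda-repo-l4t-10-2-local-10.2.89_1.0-1_arm64.deb
# dpkg prints instructions for adding the repo's key; follow those before continuing.
# In theory further packages could then come from that repo:
sudo chroot rootfs apt-get update
sudo chroot rootfs apt-get install -y cuda-toolkit-10-2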

All those steps are why it is often easier to just clone. Any time you use a clone (either raw or sparse), copy it to “Linux_for_Tegra/bootloader/system.img”, and then flash with the “-r” option to avoid generating a new image, it should “just work”. One caveat: If the custom image is not a default size, then you may have to use the “-S <size>” option to get correct partitioning (and that size is the size of the raw clone; you cannot know the actual size from a sparse clone).
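Concretely, reusing a clone might look something like this (a sketch, run from “Linux_for_Tegra/”; adjust names and the “-S” size to match your actual raw clone):
# Put the clone where flash.sh expects the system image:
sudo cp my_backup.img.raw bootloader/system.img
# Flash with -r so flash.sh reuses system.img instead of generating a new one:
sudo ./flash.sh -r jetson-xavier-nx-devkit-emmc mmcblk0p1
# If the image is not a default size, also pass the raw clone's size, e.g.:
# sudo ./flash.sh -r -S <size_of_raw_clone> jetson-xavier-nx-devkit-emmc mmcblk0p1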

Tip: Your flash parameters will determine which kernel, and some related kernel support, gets copied into “rootfs/boot/”. You may need to put a custom kernel somewhere in “Linux_for_Tegra/kernel/” prior to flash even if you use a clone or have a customized rootfs. If you flash once with no special options, and save a log, then this will immediately tell you which files were used with your specific flash. Example with log:
sudo ./flash.sh jetson-xavier-nx-devkit-emmc mmcblk0p1 2>&1 | tee log_flash.txt
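You can then search that log for the kernel-related files which were picked up, e.g. (just a sketch; adjust the pattern as needed):
grep -i -E 'kernel|Image|dtb' log_flash.txt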

These files, which were downloaded by SDKM, can’t be installed in the Jetson OS (Ubuntu 18.04) using dpkg -i xxxx.deb:
cuda-repo-cross-aarch64-10-2-local-10.2.89_1.0-1_all.deb
cuda-repo-l4t-10-2-local-10.2.89_1.0-1_arm64.deb
cuda-repo-ubuntu1804-10-2-local-10.2.89-440.40_1.0-1_amd64.deb
Tegra_Linux_Sample-Root-Filesystem_R32.4.3_aarch64.tbz2
Jetson_Linux_R32.4.3_aarch64.tbz2

And how can I install these packages (TensorRT, CUDA, and so on)?

The above just adds a repository to enable seeing NVIDIA’s content. What is the message you get when installing those two in the same command directly on the Jetson? None of the others are likely to work if that is not in place. Hint: Any “repo” file adds a named repository somewhere in “/etc/apt/”. Following install of this, one would then have to use “sudo apt update” before those changes can be useful (prior to the update the new content will remain essentially invisible).
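As a sketch of the order of operations directly on the Jetson (“cuda-toolkit-10-2” is an assumed package name; check the actual names available after the update):
# Install the arm64 local repo package on the Jetson itself:
sudo dpkg -i cuda-repo-l4t-10-2-local-10.2.89_1.0-1_arm64.deb
# dpkg may print instructions for adding the repo's key; follow those first.
# Refresh apt so the newly added repository becomes visible:
sudo apt update
# Only after that can packages from the repo be installed, e.g.:
sudo apt install cuda-toolkit-10-2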

The following is not intended to be installed on a Jetson, and it is an error to attempt to install it:

…"amd64" is for a desktop PC. Keep in mind that JetPack/SDKM is also intended to install some software to the host PC. Never install anything “amd64” to a Jetson.