Sample_fs usage when building your own kernel

The documentation seems to be unclear on how to modify your file system.

We downloaded everything from the docs to build a custom kernel. By default, it appears to come with a rootfs that is sparse, so it only has SSH and serial support and no GUI (per the documentation).

When we looked at how to update the rootfs so we would have GUI support (and OEM configuration), we ran the command that appears to build a sample_fs tar file with all the packages.

What is not clear is how to get this new tar used instead of the rootfs that is already in the L4T directory tree.

Or are we just doing this all wrong?


A custom kernel image and the rootfs are two different things.
It’s totally fine to build your own kernel while still using our sample rootfs.
Or is it that you don’t want to use the sample rootfs, and want to build your own?

When we downloaded the kernel and the rootfs with the build-and-flash kernel instructions, it appears that the rootfs does not have the GUI enabled. When we boot it, after the initial NVIDIA splash screen from the bootloader, the screen goes black and we can only SSH into the machine.

The website says that if we want a GUI-enabled version, we need a different sample_fs. It also says that this will be configured with OEM settings.

We tried running the command, which runs for quite a while and builds a tar file with sample_fs in the name.

Do we untar those contents and put them in rootfs/ instead?

When it says OEM configured, how do those settings then affect the kernel settings? Or do they?

Let me take a step back and say what we are trying to do. We’d like to build a file system and kernel for our first pass that exactly matches what we installed with the SDK download. That’s really all we want to do first. Then we can worry about making our customizations on it.


May I know which document page you are referring to?

If you want to build your own rootfs with GUI enabled, you should start with the desktop flavor, and add whatever packages you need to the file nvubuntu-focal-desktop-aarch64-packages.

sudo ./ --abi aarch64 --distro ubuntu --flavor desktop --version focal
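Before running that, the package list itself can be extended. The sketch below uses a scratch copy so it runs anywhere; in a real L4T tree the file lives under `Linux_for_Tegra/tools/samplefs/`, and the one-package-name-per-line format is an assumption — inspect the file shipped with your release before editing it.

```shell
# Sketch of extending the samplefs package list before building the
# desktop rootfs. A scratch copy stands in for the real
# nvubuntu-focal-desktop-aarch64-packages file; verify the format first.
workdir=$(mktemp -d)
pkglist="$workdir/nvubuntu-focal-desktop-aarch64-packages"
printf '%s\n' ubuntu-desktop gdm3 > "$pkglist"     # stand-ins for existing entries
printf '%s\n' htop chromium-browser >> "$pkglist"  # packages we want to add
sort -u -o "$pkglist" "$pkglist"                   # keep the list deduplicated
cat "$pkglist"
```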

Yes, untar the file into Linux_for_Tegra/rootfs/ with root permission, just as you would with the default rootfs.

OEM config has nothing to do with kernel configs. They are basically just some steps you go through upon the first boot to create user accounts and stuff like that.

Hello @DaveYYY - I’m working w/@enc0der on this issue.

Ran the command:

cd ~/nvidia/nvidia_sdk/JetPack_5.1.1_Linux_JETSON_ORIN_NANO_TARGETS/Linux_for_Tegra/tools
sudo ./ --abi aarch64 --distro ubuntu --flavor desktop --version focal

which created sample_fs.tbz2:
-rw-r--r-- 1 root root 1544981397 Jul 30 02:23 sample_fs.tbz2

Then expanded it into the existing rootfs/:

cd ~/nvidia/nvidia_sdk/JetPack_5.1.1_Linux_JETSON_ORIN_NANO_TARGETS/Linux_for_Tegra
sudo tar xpf tools/samplefs/sample_fs.tbz2 -C rootfs/

and finished the process to build everything and flash:

export JETPACK=$HOME/nvidia/nvidia_sdk/JetPack_5.1.1_Linux_JETSON_ORIN_NANO_TARGETS/Linux_for_Tegra
export KERNEL_OUT=$JETPACK/images

cp -rfv ./arch/arm64/boot/Image $JETPACK/kernel/
cp -rfv ./arch/arm64/boot/dts/* $JETPACK/kernel/dtb/
sudo cp -arfv modules/lib $JETPACK/rootfs/usr

sudo ./

cd tools/
sudo ./ -u MYUSER -p MYPWD -n orindev --accept-license --autologin

sudo ./tools/kernel_flash/ --external-device mmcblk1p1 -c tools/kernel_flash/flash_l4t_external.xml -p "-c bootloader/t186ref/cfg/flash_t234_qspi.xml"  --showlogs --network usb0 p3509-a02+p3767-0000 internal

But on reboot, console still shows:

Welcome to Ubuntu 20.04.6 LTS (GNU/Linux 5.10.104 aarch64)

 * Documentation:
 * Management:
 * Support:

This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.

Expanded Security Maintenance for Applications is not enabled.

0 updates can be applied immediately.

29 additional security updates can be applied with ESM Apps.
Learn more about enabling ESM Apps service at

I tried

sudo depmod -a
sudo reboot

But still no HDMI video (w/the Orin on an Xavier devboard)

Not sure what I missed.


Are you using the DevKit by NVIDIA, or a custom carrier board?
Is it an Orin NX on a Xavier NX DevKit?

Does it work with a DP monitor, or when you use our sample rootfs from the download page?

This is a DevKit by NVIDIA. It’s an Orin Nano on a Xavier NX DevKit. I do not believe we saw output over DP either, and that is with the sample rootfs that was downloaded.

@anomad can provide links to what we downloaded.

Yes - a DevKit by NVIDIA: an Orin 8GB w/SD card in an NVIDIA Xavier carrier board w/both HDMI and DP.

Yes, we successfully booted and got the desktop running on DP video from the provided images.

Then we went through the process of downloading and compiling Image, dtbs, modules, modules_install, etc., with no changes, and reflashed the SD card. We see the NVIDIA splash screen (and frantically press ESC to skip the IPv4/v6/PXE boot options) when plugged in via HDMI. I can ssh to the unit, but there is no HDMI video after the splash screen.

I just tried restarting the board with DP hooked up and didn’t even see the splash screen, but watched it through the UART console until it got to a boot prompt.


Then can you please check whether both DP and HDMI are working when flashed again with the default BSP and sample rootfs?

That was going to be my next step: restart from a clean OS, re-install SDK Manager, and flash from the provided rootfs/. I will post the results tomorrow. Thank you for your assistance.

You may also need to build the display driver, and run sudo depmod -a upon first boot and reboot.


Incidentally, from what I can tell, you are trying to boot to the framebuffer console. This is not yet supported, but from what I know, the next JetPack release will support this. If you are booting to the GUI, then ignore this.

We are trying to boot to the GUI, but it looked to us like the default sample_fs, when you pull it down, is not configured for that.
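For what it’s worth, two quick checks over ssh (standard systemd/dpkg commands, nothing Jetson-specific) would show whether the flashed image targets a GUI at all:

```shell
# Run these on the target over ssh. A desktop-flavored rootfs should
# report "graphical.target" and have a display manager installed.
systemctl get-default || true                          # graphical.target vs multi-user.target
dpkg -l 2>/dev/null | grep -iE 'gdm3|lightdm' || true  # is any display manager present?
```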

That said, thanks for letting us know about the console portion. I was not aware of that with the current state of the JetPack!

I removed ~/nvidia and ~/Downloads/nvidia, relaunched SDK manager and had it download all necessary files. I put the board in recovery mode.

EDIT - removing my incorrect command and output to keep this thread clean and focused.

I ran :

sudo ./tools/kernel_flash/ --external-device mmcblk1p1 -c tools/kernel_flash/flash_l4t_external.xml -p "-c bootloader/t186ref/cfg/flash_t234_qspi.xml"  --showlogs --network usb0 p3509-a02+p3767-0000 internal

DP does not send any video, but HDMI is working as expected.

That does not sound reasonable.
Can you find another DP cable/monitor for cross validation?
Also, what do dmesg and Xorg log give you?
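To narrow the triage down, something like the following pulls the display-related lines out of the logs; the grep pattern is only a heuristic starting point, not an exhaustive list of relevant drivers.

```shell
# Filter the kernel log for display-related messages (heuristic pattern)
dmesg 2>/dev/null | grep -iE 'hdmi|display|tegradc|nvdisplay|extcon' || true
# X server errors are flagged with (EE) in the log
grep '(EE)' /var/log/Xorg.0.log 2>/dev/null || true
```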

I switched to a monitor that has both HDMI and DP inputs and tested with working cables.

I first booted with DP attached and did not get any video.
I then unplugged the DP cable on the Xavier dev board and connected HDMI to the monitor and the desktop appeared on screen.

What should I be looking for in dmesg and Xorg.0.log? They are attached here:


Xorg log

Here is dmesg + Xorg.0.log from boot of just DP monitor connected (and not displaying any video)


Sorry, but I just noticed that Orin NX/Nano only supports HDMI output when used with the Xavier NX DevKit, so please use the Orin Nano DevKit as the carrier board if you want to use a DP monitor.

Also, the hardware design of the Orin series only has one video output lane, so it’s not possible to configure it to support both HDMI and DP.

That’s fine, we only need HDMI out for our project. We do not need DP.

We want to compile our own kernel with a rootfs/ that is enabled for GUI and outputs on the XavierNX carrier board.

We’ve created the desktop rootfs (sample_fs.tbz2 – from a few posts above).

So, for the next step, I just want to compile the default kernel. Do you see any obvious errors in this process?

export TOOLCHAIN_SRC=bootlin-toolchain-gcc-93
export TOOLCHAIN_DIR=gcc-9.3-glibc-2.31
export KERNEL_SRC=l4t-sources-34-1
export KERNEL_DIR=kernel-5.10
export CROSS_COMPILE=$HOME/l4t-gcc/bin/aarch64-buildroot-linux-gnu-
export JETPACK=$HOME/nvidia/nvidia_sdk/JetPack_5.1.1_Linux_JETSON_ORIN_NANO_TARGETS/Linux_for_Tegra
export KERNEL_OUT=$JETPACK/images
export KERNEL_MODULES_OUT=$JETPACK/images/modules
export ROOTFS_DIR=/home/parallels/nvidia/nvidia_sdk/JetPack_5.1.1_Linux_JETSON_ORIN_NANO_TARGETS/Linux_for_Tegra/rootfs/

tar -xjf public_sources.tbz2
cd Linux_for_Tegra/source/public
tar -xjf kernel_src.tbz2

mkdir $HOME/l4t-gcc
cd $HOME/l4t-gcc
tar xvf ~/aarch64--glibc--stable-final.tar.gz

./ -k jetson_35.3.1

cd ~/nvidia/nvidia_sdk/JetPack_5.1.1_Linux_JETSON_ORIN_NANO_TARGETS/Linux_for_Tegra
sudo tar xpf tools/samplefs/sample_fs.tbz2 -C rootfs/

cd ~/nvidia/nvidia_sdk/JetPack_5.1.1_Linux_JETSON_ORIN_NANO_TARGETS/Linux_for_Tegra/sources/kernel/kernel-5.10
make ARCH=arm64 O=$KERNEL_OUT CROSS_COMPILE=$CROSS_COMPILE -j8 tegra_defconfig	

BKUP_DATE=`date "+%Y_%m_%d_%H_%M_%S"`
mv $JETPACK/kernel/Image{,.$BKUP_DATE}
mv $JETPACK/kernel/kernel_supplements.tbz2{,.$BKUP_DATE}
mv $JETPACK/kernel/dtb{,.$BKUP_DATE}

cp -rfv ./arch/arm64/boot/Image $JETPACK/kernel/
cp -rfv ./arch/arm64/boot/dts/* $JETPACK/kernel/dtb/
sudo cp -arfv modules/lib $JETPACK/rootfs/usr

tar --owner root --group root -cjf $JETPACK/kernel/kernel_supplements.tbz2 lib/modules

sudo ./

cd tools/
sudo ./ -u MYUSER -p MYPWD -n orindev --accept-license --autologin

sudo ./tools/kernel_flash/ --external-device mmcblk1p1 -c tools/kernel_flash/flash_l4t_external.xml -p "-c bootloader/t186ref/cfg/flash_t234_qspi.xml"  --showlogs --network usb0 p3509-a02+p3767-0000 internal

Is there any other consideration we should look at?

Thank you.

The script looks fine to me, but a little question here:

Looks like you don’t need this, as you will tar the kernel modules and run afterwards.
Also, shouldn’t it be lib/modules instead of modules/lib?
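One more thing that may just have been elided from the post: the script shows `make … tegra_defconfig` but no compile pass before the Image and module files are copied. If it is genuinely missing, the usual sequence (a sketch only, reusing the variables exported earlier and run from the kernel-5.10 source directory) would be:

```shell
# Sketch of the compile step that normally follows tegra_defconfig;
# KERNEL_OUT, KERNEL_MODULES_OUT, CROSS_COMPILE as exported earlier.
make ARCH=arm64 O=$KERNEL_OUT CROSS_COMPILE=$CROSS_COMPILE -j8 Image dtbs modules
make ARCH=arm64 O=$KERNEL_OUT CROSS_COMPILE=$CROSS_COMPILE \
     INSTALL_MOD_PATH=$KERNEL_MODULES_OUT modules_install
```

With `O=$KERNEL_OUT` set, the built Image and dtbs land under `$KERNEL_OUT/arch/arm64/boot/` rather than the source tree’s `./arch/arm64/boot/`, so the copy paths in the script would need to match whichever location was actually built.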