Nvidia Jetson and Buildroot

Hi all,

I want to build a minimal computer-vision-based system using a Jetson Nano and Buildroot. Is it possible?

Thanks in advance,
Khang


I once used Buildroot to build a root filesystem for a TX2; it ought to work for the Nano.
The problem you may run into is integrating the binary-only parts of the system. Depending on your exact use case, you might not need the binary-only components, in which case this is a non-issue.


Hi D3,

Thanks for your info. I wonder if you could share your experience installing the Buildroot-customized rootfs on the TX2, please?

Thanks in advance,
Khang

Khang,

I didn’t dig very deeply: I built a root filesystem and booted it, but beyond that I didn’t explore much further.

I posted my configuration in this thread: Embedded Linux - Jetson AGX Xavier - NVIDIA Developer Forums

Regards,
Greg

Hi @khang.letruong,

I know this is an old thread, but wanted to let you know that I have a Buildroot port for the Jetson Nano (4GB SD card model - p3450-0000) that will be submitted to the main Buildroot project soon. There’s still a lot of work left for system configuration and packaging of NVIDIA software, but the base board configuration works.

After checking out the branch, you can build the firmware with:

$ cd buildroot
$ make jetson_nano_defconfig
$ make all

The resulting image will be generated at buildroot/output/images/sdcard.img.
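To get that image onto an SD card, here is a minimal sketch. The /dev/sdX device node is a placeholder, not a real path: check which device your card reader shows up as with lsblk before writing, because dd will happily overwrite the wrong disk.

```shell
# Replace /dev/sdX with the SD card's device node as reported by lsblk.
IMG=output/images/sdcard.img
DEV=/dev/sdX

if [ "$DEV" = "/dev/sdX" ]; then
    # Safety guard: the placeholder is still set, so don't write anything.
    echo "Edit DEV to point at your SD card before flashing."
else
    # Write the raw image and flush it to the card before removal.
    sudo dd if="$IMG" of="$DEV" bs=1M conv=fsync status=progress
    sync
fi
```

After flashing, insert the card into the Nano's SD slot and boot as usual.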

Kind regards,
Graham


@gleva That’s great news. Would you mind sharing a link to the upstream patch and discussion once submitted?


Hi @gleva,

It’s very nice of you to let us know! I’m a fan of Buildroot!

K.

Hi @gleva,
Hope you have a better time than I do every time I have to pull the latest release for Jetson devices into our Buildroot tree.

Best Regards,
Juan Pablo

Hi @juan.tettamanti,

I had some trouble with the first release of the Nano 2GB firmware (32.4.4?), and I had some doubts about whether the URLs would stay accessible without a login, but I noticed that as of the 32.5 release the BSP URLs no longer require one. Is this the difficulty you’re referring to?

Bumping all of the dependency repositories for building the kernel in Buildroot is quite time-consuming in general though, I agree :) You might also take a look at the OpenEmbedded for Tegra (OE4T) project, which I believe is used by the Yocto ports for the Jetson line. The OE4T project builds the kernel as a monorepo and also carries patches for newer versions of GCC and U-Boot, if I recall correctly. There’s another implementation of the Nano Buildroot port currently under review that depends on this rather than building the L4T kernel within Buildroot.

Kind regards,
Graham

Hi @gleva,
The URLs no longer requiring a login is certainly a step forward; I hope to see more improvements along those lines in the future.

I’ll try to mention a few things I’ve run into, though you might see them in a different light than I do:

  1. When working on a Buildroot project, one of my first steps is to compare the source code against mainline.
    This process does take some time, but is usually manageable, and it gets easier with successive releases.
    However, the amount of changes against mainline and between successive releases for the Nano is huge.
    In addition, the out-of-tree parts of the code are split between about 6 different repositories.
    I also found the dts structure for the boards quite troublesome to follow.

  2. Some folks (like me) need to build rather small images for their embedded systems.
    Unfortunately, since no source code is available for CUDA, TensorRT and the other libraries, it’s not possible to compile leaner versions of them.
    The libraries in the Jetson release also seem to come with a lot of extra code, which can only be pruned with nvprune on ARM (x86 is not good for that).
    As a consequence, instead of a clean set of packages that work on your sources, I have a package that works on several manually created binaries.

  3. And then there are even more unexpected things.
    Even after you get to the point where you think you’ve found everything you need among the wildly varying population of tgz files available for download, you’ll have to deal with new divisions between libraries, track down every last one of their dependencies, and watch some of them blow up because you missed a symlink that was created as part of the latest release.
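For the pruning mentioned in point 2, here is a rough sketch of how nvprune can be used to strip device code for other GPU architectures. The library name is hypothetical; nvprune ships with the CUDA toolkit and, as far as I know, only operates on static libraries and relocatable objects, not on shared .so files. sm_53 is the Nano's Maxwell GPU.

```shell
# Hypothetical static library to prune; adjust to your CUDA install.
LIB=libcublas_static.a

if command -v nvprune >/dev/null 2>&1; then
    # Keep only the Nano's Maxwell (sm_53) device code, dropping all
    # other GPU architectures embedded in the fat binary.
    nvprune -arch sm_53 "$LIB" -o "${LIB%.a}_sm53.a"
else
    echo "nvprune not found; install the CUDA toolkit first."
fi
```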

Even if what I described above sounds like part of the normal development process, I think it could be easier if things were better organized and more source code were available.

Moving forward, I think what most people would expect from a Buildroot release for an NVIDIA Jetson product is the ability to use CUDA and TensorRT as well.
I also believe that someone looking for a Buildroot version probably expects a smallish filesystem image and some degree of customizability.

Best Regards,
Juan Pablo.


Hi @juan.tettamanti,

I think you raise some very valid issues, and these are pain points I know others are experiencing too. I can’t speak for NVIDIA on this matter (my opinions are my own, and my Buildroot work has been done purely in my free time), but I’ll try to respond to your points as you enumerated them:

  1. Amount of source code changes between successive releases – this is something I’m not sure will get better. As I understand it, a lot of effort goes into crafting stable release versions that are shared between many different divisions at NVIDIA (embedded, robotics, automotive, mobile, etc). Sorry, I wish I had a better answer here.

    As you noted, the out-of-tree portions of the kernel are split between 6+ different repositories, and without patches to the kernel build scripts they all have to be carefully arranged in a specific configuration. This is really painful to maintain in Buildroot (speaking from experience here). The build complexity means the kernel isn’t really consumable by itself for end users, and I think there’s room for improvement. I’d personally like to see NVIDIA offer a single download point for a fully integrated kernel that downstream projects like Buildroot or Yocto could consume more easily. I don’t know if this is feasible, but it’s something I plan on looking into.

  2. Image size – I think the ability to build very minimal embedded systems is one of the best parts of Buildroot. I know this is absolutely critical for some users, and it’s one of the reasons I think Buildroot ports for the Jetson boards are a great idea. Can you share the image sizes you’re looking at? I’ve mostly been focused on the base Buildroot images with a BusyBox filesystem, which range from ~125MB (r32.4.3) to ~150MB (r32.5). I thought this was great considering the default SD card images start around 14GB.

  3. Source code organization – I agree with you that this can be painful. I recently spent several hours over a weekend trying to figure out how to cross-compile CUDA from x86_64 to aarch64 (mostly due to code/resource organization). This is an issue I’d like to see improved too. I can tell you I’ve reached out to our technical writers to see if we can improve the cross-compilation instructions, and I have been working to get some of the issues that I and others encountered fixed.

I very much appreciate your feedback and want you to know it is well received. While I don’t work directly with our embedded teams, I will do my best to pass this feedback on to them too.

Kind regards,
Graham Leva


Hi @gleva,
Regarding image size, I did a quick check on my rootfs size with CUDA and it was about 215MB, so your 150MB without any extra components seems right.

However, if you need something like the NVIDIA Performance Primitives or TensorRT, it’s a different story.
If you use the shared libraries, you’ll be looking at 340MB for cuBLAS, 605MB for cuFFT, 450MB for NPP, 1.4GB for the inference portion of cuDNN and 720MB for TensorRT.
That is still lower than the Ubuntu images, at about 3GB for everything mentioned above.

For my specific use case, I was able to keep part of the functionality while keeping the image size under 350MB (tensorrt included).
I suspect there is room for improvement on the shared library side if you had versions specific to the Nano.
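For anyone trying to track down where the megabytes go in their own image, a quick sketch: the Buildroot target directory below is an assumption, so point ROOTFS at whatever output tree you actually have.

```shell
# Hypothetical Buildroot output path; adjust ROOTFS to your build tree.
ROOTFS=output/target

# List the ten largest shared libraries in the image, biggest first.
find "$ROOTFS/usr/lib" -name 'lib*.so*' -exec du -h {} + 2>/dev/null \
    | sort -hr | head
```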

Best Regards,
Juan Pablo.