Jetson newcomer, trying to work it all out

Sorry in advance for any stupid Qs, but not having had a great deal of Linux experience and zero Jetson experience I’d be grateful for any help!

I am interested in porting my server streaming application from Windows to Jetson. Currently my server hardware is a SFF PC build with a Magewell M.2 capture card and NVIDIA 4060 GPU. It’s about as small as can be in a DAN A4 case and works very well.

The project is a modern take on the old “Slingbox” concept, where the server captures the output from a STB, can also control the STB and then encodes and sends the AV stream to a single player which can be on the LAN, or the other side of the World. SRT is used as the transport protocol. I have written players for macOS/iOS/tvOS/Android/Windows.

Anyway, I’m intrigued to see if I can make the server work on Jetson Orin NX in a super-small form factor, as it can accommodate an M.2 capture card plus has NVENC, network card, etc, so it seems to tick all the boxes.

So my understanding is that Jetson is running a version of Ubuntu 22.04 with some additional modifications, drivers, SDKs etc.

I understand that one needs a carrier board (the place to plug in the Jetson module, plus all the physical ports, etc, so kind of like a sort of motherboard I suppose?) and then the Jetson Orin NX board itself.

There seems to be the concept of a “developer kit” which seems to be a particular combination of carrier board plus module with Ubuntu pre-flashed onto the storage device. I understand that when you boot it you would then have a fully-working Ubuntu OS for building and running applications.

Then there is a production-ready computer consisting of chassis + carrier board + module which is bought with nothing installed onto it at all. There appears to be some Voodoo involved in preparing an image from the dev kit and flashing it onto the production-ready computer but for my current (hobby) usage I think I only need a dev kit to build and run my application in my house, but it would need a chassis as I want to tuck it away in the AV cupboard when testing for prolonged periods.

My Qs are:

  1. If I buy something like this https://www.yuan.com.tw/product/314 can I flash the developer kit myself? Where would I find the image + instructions to do so? Presumably if I screw things up I can start again and flash it again? It seems that I need to have a computer running Ubuntu to do the flashing. Would a Proxmox VM running Ubuntu with USB passthrough work? I have a nice Proxmox-based homelab setup based on some of the latest Intel NUC tech and it would be easy to create an Ubuntu VM.

  2. I presume I can set up a full GUI desktop on Jetson, and VNC into it from my Mac for example?

  3. Are packages like fdk-aac-dev available, for AAC encoding? Or is the version of Ubuntu limited, so I won’t find much available? I guess if a library is just built using standard C/C++ then it’s just a case of building it for arm64, so it should work? I have no idea how widespread support is for Ubuntu packages built for arm64.

  4. Further down the line, if I wanted to integrate Jetson builds into my TeamCity CI, I could hopefully cross-compile from the Ubuntu VM?

Am I thinking on the right lines?

Apart from the NVIDIA Jetson devkit (a Jetson module + NVIDIA carrier board), products created by partners (a Jetson module + custom carrier board) will require the partner’s support for flashing their custom SW onto the product; they should have instructions to guide customers through that. The development SDKs and applications remain the same on the host side.
To flash the SW onto the device, our suggestion is to have a host machine with a native Ubuntu OS; some developers did try with a VM successfully, but we’re not able to help with any issues arising from there.
If quicker SW support for Ubuntu 22.04 is a consideration, then a Jetson Orin Nano devkit might be another choice, as we’re going to have the JetPack 6.x release in December.

You can refer to Setting Up VNC | NVIDIA Developer
See also: What is the best way to control the jetson Orin GUI remotely? - Jetson & Embedded Systems / Jetson AGX Orin - NVIDIA Developer Forums

This would need other users to share their experience with AAC encoding. But in general, if it builds for arm64 it should work.

A host with native Ubuntu is suggested. We have not tried with a VM; other developers may be able to share their experience.

Ok thank you so much for taking the time to read and reply to my post.

I’ll likely set up a physical Ubuntu install then, at least to begin with.

Would I be correct in saying that I can install a Jetson Orin NX module into the Jetson Orin Nano dev kit?

Yes, Orin NX module and Orin Nano module are PIN compatible. The Orin Nano developer kit carrier board can support all Jetson Orin Nano and Orin NX modules.
See Announcing Jetson Orin Nano Developer Kit - Jetson & Embedded Systems / Announcements - NVIDIA Developer Forums

Thank you.

I can see that some carrier boards have 2 x M.2 M-key slots, which is what I need because I would put the capture card in one of them. The official dev kit has this.

Is there any problem with using the M.2 2280 for my capture card, and then the M.2 2230 for the SSD?

It seems generally with these kits if they come with an SSD it’s the 2280 size one, but I assume that I can use either slot for the SSD and buy a 2230 size SSD? I can see in the official dev kit the 2230 is PCIe Gen 3 x 2 (vs PCIe Gen 3 x 4 for the 2280) but for an SSD that’s ok I guess? SSD read/write speed is not critical at all for my application.

Are there any obvious pitfalls with this arrangement? Does it make it any harder to flash?

You couldn’t use power over USB (even if it worked I think it would fail under load) when adding a couple of m.2 drives. You’d have to use the barrel connector to deliver enough power.

Thanks for that, will bear in mind!

So I’ve installed Ubuntu 20.04 onto a new Intel NUC and my Orin NX carrier board/module/chassis etc are on order.

Aside from SDK Manager, what else will I need on the host PC? To begin with I will compile directly on the module, i.e. no cross-compile etc.

I was looking for a step-by-step guide, but can’t find one - maybe it’s just the SDK manager I need then?

That’s actually a much bigger question than it sounds like. It depends on the software you are building. I suppose at the least you need the compiler. Is this C++? If so, then you can check whether your compiler (named g++) is there on the Jetson:
which g++

You might want to write down the version and other info via:
g++ --version

The reason I say this is that there have been a lot of C++ feature additions over recent years, e.g., there is C++11 or C++20, and everything in-between. Which C++ standard are you using? That isn’t always easy to answer, but depending on what the answer is it might change either (A) the arguments passed to g++ during compile, or (B) you might need a newer release of g++ (or both release upgrade and arguments added to builds).
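
For example, a quick way to probe which standards a given g++ will accept is to compile a trivial translation unit with the -std flag (the file path here is just an illustration):

echo 'int main() { return 0; }' > /tmp/std_probe.cpp
g++ -std=c++17 -c /tmp/std_probe.cpp -o /tmp/std_probe.o && echo "c++17 accepted"
g++ -std=c++20 -c /tmp/std_probe.cpp -o /tmp/std_probe.o && echo "c++20 accepted"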

One of the other questions is what libraries you might be linking against. Often, if you try to build, you’ll find an undefined header, which is simple to track down. Then you just install the header via the apt-get mechanism.

The GUI itself gets more complicated. You might be starting from scratch with something completely different as many widgets and GUI kits are not cross platform. If you are going with OpenGL, then you also have to know that the release version matters, and you might need to downgrade or upgrade the OpenGL standard you work with. If you work with OpenGL, then on the Linux side you probably want to install the program which queries this, glxinfo:
sudo apt-get install mesa-utils

You can then get a good idea about OpenGL support version with:
glxinfo | egrep -i '(nvidia|version)'

Incidentally, there isn’t enough space on a Jetson to build most of that content, which is why I assume you’ve mentioned the m.2. I think this is what most people would use. It is easier to mount this somewhere for development and use than it is to “replace” the root filesystem with this. Procedures and information change depending on whether this is a purely SD card developer’s kit, versus an eMMC model on a third party carrier board. It is important in this forum to always state exactly which hardware you are using (naming a developer kit model is as precise as it gets; sometimes people use a separate module and third party carrier board though).

Much of the “optional” components you can add via JetPack/SDK Manager are related to CUDA and AI. The more traditional content, e.g., the compiler and libraries, are usually just added via the apt-get mechanism. You don’t need the AI or CUDA components for the average user space program on Linux. You would need the OpenGL driver from NVIDIA for OpenGL development, and you’d want to only use the driver installed when flashing with JetPack/SDKM. If you see NVIDIA in the earlier glxinfo command, then you already have this (although it might be an older release than you want).

If you start to compile and something breaks or is missing, then you can always give an excerpt of that for a new question.

Thank you for your detailed response!

So the software is a video streaming server, so we’re basically talking source capture + encoding + network transmission. Further network code is also used to control the source device (power it on/off, change channel, navigate its UI etc). Most of my libraries are C++, and I compile using the C++20 standard. There are some third party libraries used, like SRT and Botan for encryption. It doesn’t have a user interface on the server side (there are a bunch of player apps I wrote for iOS/tvOS/macOS/Windows/Android which obviously have a UI, but that’s of no concern here); it operates via JSON files for configuration.

It looks like I will need the Multimedia API for the encoding and video processing (scaling, specifically), as I do not want to use GStreamer/FFmpeg, since my libraries do all of that stuff using the native APIs already. In fact, I spent a large part of this year migrating away from FFmpeg and learning how to use all the native APIs for decoding/encoding etc on all the platforms I am supporting. I have tried to rely on as few third party libraries as possible, aside from the standard C++ libraries.

At present my server only runs on Windows, and so for certain parts of the code I only wrote a Windows version. One example would be AAC encoding; only the server needs to do this, when the captured audio is raw PCM data, so I’ve used the Windows MFT API for this. Clearly on Jetson I need to use something like fdk-aac instead. Another example is service discovery; the server needs to be able to search for Apple TVs on the network in order to control them, so I used the native WIN32 API for this. On Jetson, I will need to find the Linux alternative (Avahi, maybe?). At this stage I don’t know how many of such libraries are or are not supported on Jetson. I will find out soon!

Project wise, everything is done using CMake. I’m assuming (hoping!) that Jetson has a recent version of CMake. If not, then that’s going to set me back somewhat, as the other thing I did this year was migrate all my various projects (Xcode, MSVC, etc) to a universal CMakeLists.

I guess I was trying to understand the relationship between the Ubuntu host and the target device. It seems that in the Jetson world one can build and entirely develop on the target device itself (or a dev kit version of it, with suitable tools etc installed), right? This differs from, say, iOS or Android, where you always build on a separate machine and just install and run on the target. But I have no idea at this stage what sort of compile times etc I’d get building on a Jetson Orin NX for example, or whether it’s practical for a large project. I will be using a third party board (DSBOARD-ORNX) and 16GB Orin NX, fitted with a Magewell capture card.

From what I understand then, a Jetson dev kit comes in a form where it has a bunch of tools etc installed already, and one can plug in a monitor, keyboard, mouse etc and just develop on it directly without using a host at all, until you want to deploy a final image to proper devices, at which point you’d need to do this from the host. Similarly, to update/re-flash the dev kit (or another board, such as I am going to use) I’d need the host for that. Beyond that, you then have actual cross-compiling on the host, and I haven’t yet looked into exactly how that works.

Bear in mind, I only started looking at Jetson a week or so ago, so I’m kind of still figuring it all out.

I must say, this forum seems very helpful and responsive, which bodes well!

For the third party libraries, are they available on Linux? One tool you have is the package system’s search tool. Examples:

apt search botan
apt search srt

Answers are not guaranteed to be what you are actually looking for, but if they are, then this might be what you are interested in:

sudo apt-get install botan
sudo apt-get install srt-tools

It might be useful to list all packages currently on a system, but filter for the ones with the “-dev” in their name (grep and egrep are topics all on their own, but the short story is that they are regular expression filters; the filtering you are normally used to is called globbing):
dpkg -l | egrep '\-dev'

If you see a specific package and want to know which repository it came from (which is a source listed in “/etc/apt”, mentioned later), here is an example using the “bash” package:
apt policy bash

The part which is less obvious is that to compile the software against those libraries you need the “dev” packages (which contain the header files to #include). In the case of more or less standard software (ones which are available on the default servers) this isn’t usually a problem if you understand the name of the package (sounds a bit like a clichéd horror movie), but some third party packages require adding their apt repository. In yet other cases a third party might make available headers for building against the library as a simple tarball package (anything archived with the “tar” tool is referred to as a tarball; the file name usually has .tar in it, but if combined with gzip or bzip2 compression, then it might look like “something.tar.gz”, “something.tar.bz2”, “something.tgz” or “something.tbz2”, or similar; .tgz is the same as .tar.gz, and likewise .tbz2 is the same as .tar.bz2).
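
If you do end up with a tarball drop of headers/libraries, handling it is straightforward (the file name is the illustrative one from above, and the destination directory is just an example):

tar -tzf something.tar.gz            # list the contents without extracting
tar -xzf something.tar.gz -C ~/sdks  # extract into a directory of your choosing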

To complicate it a bit more, there are often different official repositories based on standard use or for third parties, or maybe for licensing differences. You might want to explore the file “/etc/apt/sources.list” to see the default repository setup. This is a plain text file, and lines starting with “#” are comments. The supplemental repositories which were added are usually in “/etc/apt/sources.list.d/” and have the same format, but keep the official sources.list clean. Other directories are for things like PGP keys.

About C++20: You might end up regressing to C++17. It depends on what is available by default on the Jetson. Note that gcc (and g++) have several standards available, and you usually have to pick one via an argument on the g++ compile line if it is anything other than the default. Man pages are typically not installed by default on a Jetson, but you can expand and add them (I’d have to look up the command) to view them on the Jetson. Your host PC would have the man pages already. Some useful information you might copy for later use is the output from:

  • g++ --version
  • man g++

The g++ version changes with the Ubuntu release, and currently the Ubuntu 20 release is used. Around Dec. 5 of this year a new JetPack release will come out which supports an L4T release based on Ubuntu 22 and is compatible with Orin. This will probably have a new g++ release which will support more C++ extensions. However, for the moment, if we were to look at the information for telling g++ to use C++17, then start with:
man g++

Then search (it is a regular expression, not globbing) for “c++17” via this (which has some escaping in it…regular expressions are very very definitely worth your time to understand the basics of):
/c\+\+17
(the slash / key in the man page drops into regular expression search, and “c\+\+17” searches for an exact match to “c++17”…sometimes you want to change the case sensitivity of the search, which you can toggle with the “-i” key sequence within the man page)

So far as video goes I’m the wrong guy to ask. However, your biggest issue in porting from PC to Jetson (if both use Linux) is detecting the GPU for GPU-accelerated apps. Many applications do this with the nvidia-smi application, but this is for PCI based discrete GPUs (dGPU), and does not exist for integrated GPUs (iGPU). I would expect you will be asking questions specific to video which include detecting GPUs if you are going to use specific abilities (OpenGL/OpenGLES is not a specific ability such as CUDA version; if you use the right OpenGL release, then it should just work; other more specific features require a query of the GPU). Consider asking a separate forum thread for any given API you are interested in, especially if it is related to GPU or video. An example of a good thread on the forum is to ask about requirements for using AAC encoding, giving a short clip of the C++ code which worked in a different environment but which you want to port.

Networking on Linux tends to be much easier to deal with than on Windows. On the other hand, some services you might need will require installing some network toolkit and kernel driver which is compliant with the Windows version. This means that although basic networking is easier, when you go to work with services originally created for other operating systems, it might be a more difficult question and probably also deserves its own forum thread. I can’t answer most of your services questions because they are designed for Windows (you won’t get a WIN32 API, although there might be a VM or Docker substitute).

CMake is widely available and works quite well on Linux. You won’t have an issue with that, but what CMake uses in terms of packaging of various software might be a problem (see the earlier mention of the apt package system; those are Linux packages, but sometimes they exist for both Linux and Windows without effort, while at other times there is some name translation, and yet other cases will be you porting it). CMake itself won’t be an issue.

Jetsons are entire operating systems and useful for anything a desktop Linux system is useful for. The main differences are that older Jetsons have less RAM (there is plenty on an Orin NX), and disk space is always at a premium. If you have the disk space, any Orin is actually a pretty good compile device. You might want to set the power model to the max (usually “sudo nvpmodel -m 0”, but see the docs for the L4T release), and then peg the clock to max within that model (“sudo jetson_clocks” maxes within the current nvpmodel; these settings go away at reboot unless you’ve taken steps).
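
For reference, those commands as they would be run (the exact model numbers available depend on the module and L4T release, so check the docs first):

sudo nvpmodel -q     # query the current power model
sudo nvpmodel -m 0   # select the maximum power model
sudo jetson_clocks   # peg clocks to max within the current model (does not persist across reboots)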

The previous paragraph reminds me of something about make that you will find useful (CMake will be similar) called the “job server”. If you are in a power model which has 12 cores, then the “-j 12” option to make would run 12 compile jobs in parallel. This only helps for steps that are independent of each other. I’m not very good at explaining this, but you might need to compile a lot of .o object files from .cxx source files, and each .o is its own compile. Then there is a linker stage which uses all of the .o files at the same time. If you have 12 CPU cores, then you might want to build 12 .o object files at the same time on different cores. That’s the job server. I have not set it up with CMake, but I’m sure there is an equivalent to the Makefile “-j #” job server option. As long as you have enough RAM this really speeds things up (each “job” uses its own RAM…lots of jobs mean lots of RAM…but Orins tend to tackle this quite well and it can greatly speed up builds). When compiling takes too long, consider looking up the job server.
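
A minimal sketch of putting that to use (nproc reports the number of CPU cores visible in the current power model):

make -j"$(nproc)"                  # classic Makefile parallel build
cmake --build build -j "$(nproc)"  # the CMake equivalent, whether the generator is make or ninja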

As it happens I’ve taken the approach so far of building any third-party libraries myself where the source code is available (which is the case 99% of the time).

It’s a bit unorthodox I suppose but I have generally dumped them into a support library constructed from a CMakeLists:

botan (amalgamation files)
concurrentqueue
date
fdk-aac
json
json_struct
libtelnet
libyuv
lz4
rapidfuzz
rpmalloc
sokol
sqlite3
srt
stb
srtk
sun
utfcpp

This approach has made it pretty easy to port my libraries to the various platforms I use, without fighting with the intricacies of different build systems on different platforms for each of the libraries.

There are some small downsides when updating to newer versions, but a lot of the above are either small (or even single-header only) libraries so updating is generally quite trivial. And I don’t need to always have the latest and greatest in most cases.

I already got this compiling today on Ubuntu for x86_64 without having to install any development libraries at all.

The exceptions are stuff like the NVIDIA Video SDK and the Magewell Capture SDK, where they supply DLLs/.so files and import libs/header files.

Any lack of support for C++20 could be a problem. I have used a fair amount of C++20 features, so I wouldn’t want to have to start rolling back some of those changes for Jetson. All the other platforms I target (Android, iOS, macOS, tvOS and Windows) support the C++20 features I use.

My understanding for the video encoding and processing (scaling, etc) is that I must use the Jetson Multimedia API which appears to be some sort of extension to/built up on V4L2.

Network code should be largely covered; I have built a cross-platform asio library on the native APIs; IOCP for Windows, kqueue for Darwin platforms and epoll for Android. I believe I will be able to re-use most if not all of the epoll code for Linux, as essentially this part of the Android code is just Linux. The exception, as mentioned, would be service discovery, since in the Windows code I am using the Win32 DnsServiceBrowse API.

So I’m less concerned about getting my dependencies working on Jetson, if the compiler is modern enough.

Once I have my hands on a developer environment built on Orin NX I will be able to figure out exactly what compiler support there is for C++20 etc.

Good to know that a higher spec Orin NX (I will have the 16GB version with a 250GB SSD) will be a reasonable development environment. It’s likely I will try and just build directly on it before getting bogged down with cross-compiling then, as maybe I won’t actually need to do that.

Thank you very much for all the tips and suggestions - I will certainly refer to this once I start getting my hands dirty!

Oh, the other thing was that it would be useful if I could do everything with clang rather than gcc. Is that possible on Jetson? It’s just that while I build Windows using MSVC (although it compiles with clang too), all other platforms (Android + iOS/macOS/tvOS) are using the clang compiler. It was trivial to get clang-17 installed on Ubuntu 22.04 today and building the support library as mentioned above.

Is it going to be as straightforward to get a modern clang toolchain working with Jetson when they release the new Ubuntu 22.04-based version I wonder? I can use gcc at a pinch, but always nice to keep things consistent and as simple as possible.

Many cross platform developers take the approach mentioned above. Sometimes they stick to static linking, rather than dynamic linking, which increases size. This can be easier to work with though if you can stand the extra storage space. Incidentally, if you have a program which is dynamically linked, and you want to know what that program is linked to (or a library that is linked to another library), then you can use the ldd command on it. As an example, check out “ldd /usr/bin/bash”.

I don’t know how practical it is to use the C++20 features. Even on a much newer release of Linux there would likely be a need to rebuild some of that as you are doing, but the older Ubuntu 20.04 (which is the current L4T release’s Ubuntu version…this will change around the end of next week when Ubuntu 22.04 is added…but even that is still not bleeding edge) is going to need a lot of updates. Keep in mind though that the libraries you link against don’t need to “support” C++20 and are probably plain C; those are always linked extern "C" anyway, and your compiled program won’t care what language the libraries are written in. So the C++20 requirement is only for software you build and have written, and won’t matter with regard to libraries linked against (indeed, the linker has no concept of namespaces, and anything which is C++ actually uses name mangling to avoid namespace collisions).

I won’t guarantee it, but the video encoding is unlikely to care about C++20 use in your software. If you are piping to some library or application, or if you are linking to a dynamic library, then you can just use whatever is already there. On the other hand, if you are trying to build your own version of that software with C++20, you’re probably doing work you don’t need to do, and much of that software will be painful to adapt to C++20. Linking against a library from the Multimedia API tends to not care.

Sometimes applications which are specific to a platform, e.g., Android, require certain kernel features. This might be problematic as the same features may not even exist on other platforms. Installing drivers for different operating systems which behave the same is mostly not practical without a lot of effort. Libraries which translate by using the host OS kernel services differently, while looking uniform to the user space app, tend to work well.

I would suggest waiting for the more recent Ubuntu 22, which will come out as an L4T/JetPack release in about a week. This probably has more compiler C++20 possibilities, but I have not actually looked, so I could be wrong.

As far as other compilers go, I suspect anything known to work with Ubuntu 20.04 (or 22.04 next week), and designed for 64-bit ARM, will work. Once you get to ARM, though, the choice of commercial compilers is probably more limited. I don’t know if clang will help or not…it has certainly been broadly available for a long time, but I don’t know what standards you might need, nor have I looked at standards support in clang. It is free, so you could install it and try it out (see “apt search clang | less”, and then scroll around…there are a lot of clang packages and bindings available to install).

As far as actual clang targeting goes, this is not something NVIDIA looks at; that’s down to what Ubuntu uses. So you could look at the official Ubuntu docs for 22.04 and see what clang is provided. Whatever that is, the next NVIDIA release would inherit it.

Yep, the support library I mentioned is a static library to keep things straightforward.

Actually, it turns out that the first three of my libraries I’ve tried to port (there are seven) are building with minimal changes on Ubuntu 22.04 x86_64 using the default GCC compiler (version 11.4), so I think I may step back from complicating things with clang on Ubuntu for now.

What I’m hoping is that by the time I’ve ported the libs I need to Ubuntu 22.04 x86_64, Jetson will also be operating on 22.04, and I’ll be in a good position to then try and build directly on the Orin NX device, with the only real difference (hopefully) being building for arm64, plus the additional step of porting my code for Jetson to use the Multimedia API necessary for video encoding.

i.e. I am hoping that if the updated Jetson OS is based on 22.04 it will also be based on the same 11.4 compiler as I am using (albeit for arm64) and C++20 support will just be the same.

For clarification, for the video encoding I am using the NVIDIA Video SDK on Windows. This same SDK also exists on Linux but only for x86_64 (I think) but it is not the API used on Jetson. I was advised elsewhere in the forums that the Jetson Multimedia API is to be used instead to access the NVENC hardware encoder. Slightly annoying, but if that’s the only complication I’ll take that!

Sounds like you are well on your way. It probably does not matter, but I think some of the differences between x86_64 and arm64 are unrelated to architecture, and mostly related to the iGPU (versus dGPU). I think a lot of people would like a compatibility set of tools that allows querying the iGPU capabilities using the same commands a dGPU uses. Even so, there are some limitations on the arm64 GPU.

So the two libraries so far which I have needed to install to allow me to build on Ubuntu x86_64 are uuid-dev and libavahi-client-dev.

I started to look at building these manually, with a view to doing the same for arm64, but began to head down a rabbit hole, e.g. libavahi-client-dev then depended on libdbus-dev etc, so I’d then need to do the same for that and I wasn’t too sure when it would end.

So I started looking at this Ubuntu “multiarch” concept instead, as these are such basic libraries that they would have already been built and tested under arm64 anyway, so why am I duplicating all of that.

So after some googling etc I have done this:

sudo dpkg --add-architecture arm64

sudo nano /etc/apt/sources.list

In this file I added [arch=amd64] before each URL.

I then copied this file into /etc/apt/sources.list.d/arm64_sources.list

sudo nano /etc/apt/sources.list.d/arm64_sources.list

I then changed the [arch=amd64] to [arch=arm64] and changed the URLs from the standard archive mirrors to http://gb.ports.ubuntu.com/, which I have learned is where the arm64 and other ports reside.
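
For anyone following along, the resulting lines in arm64_sources.list look roughly like this; the suite/component names are illustrative for 22.04 “jammy”, and the path suffix on the ports mirror may vary, so match them to whatever is already in your sources.list:

deb [arch=arm64] http://gb.ports.ubuntu.com/ubuntu-ports jammy main restricted universe multiverse
deb [arch=arm64] http://gb.ports.ubuntu.com/ubuntu-ports jammy-updates main restricted universe multiverse
deb [arch=arm64] http://gb.ports.ubuntu.com/ubuntu-ports jammy-security main restricted universe multiverse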

I saved that, and then ran

sudo apt-get update

Then I installed the arm64 packages

sudo apt install uuid-dev:arm64
sudo apt install libavahi-client-dev:arm64

This all proceeded without errors. I can see the arm64 variations along with the x86_64 ones:

oliver@VM-Server-5:~$ ls /usr/lib/x86_64-linux-gnu/libuuid*
/usr/lib/x86_64-linux-gnu/libuuid.a
/usr/lib/x86_64-linux-gnu/libuuid.so
/usr/lib/x86_64-linux-gnu/libuuid.so.1
/usr/lib/x86_64-linux-gnu/libuuid.so.1.3.0

oliver@VM-Server-5:~$ ls /usr/lib/aarch64-linux-gnu/libuuid*
/usr/lib/aarch64-linux-gnu/libuuid.a
/usr/lib/aarch64-linux-gnu/libuuid.so
/usr/lib/aarch64-linux-gnu/libuuid.so.1
/usr/lib/aarch64-linux-gnu/libuuid.so.1.3.0

So this all looks reasonable, but the instructions for Jetson development talk about mounting the Jetson device drive and compiling against the libraries on its file system directly, rather than doing things this way. So although this approach will perhaps allow me to get a head start by verifying I can build my libraries on Linux arm64, it might not be a viable route, i.e. if I am able to end up creating some sort of arm64 .deb file at the end of it for my application, I won’t be able to transfer/install it onto the Jetson? Or will I?

Whenever you cross compile for user space you have a cross linker instead of the native linker. Linkers (including cross linkers) have a default link path. You can see what your linker sees in its current path:
ldconfig -p

You might examine the path via a command like this:
ldconfig -v 2>/dev/null | grep -v ^$'\t'

It is important to know that the cross linker and native linker will have different defaults. So you’d run the cross linker’s version of ldconfig instead of the native one.

When manually creating links in assembler, some commands embed the linker path, and others use the default path. If you’ve performed linking, and if no explicit path is used, then you do an ordinary search in the linker path of the particular linker; you might be surprised though if a link command forces the library to be searched for in an exact spot, and that spot is not where you expect. For example, maybe on the host it is bound to “/lib/aarch64-linux-gnu”, but on the Jetson the library is in “/lib”, which would cause the library to not be found when copying to the Jetson (you could of course add a symbolic link in “/lib/aarch64-linux-gnu” to the “/lib” version). You might end up running ldd on your executable to see which library it finds. Do realize though that this does not tell you if the executable is bound to that location, it only tells you what it sees or cannot see.
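
As a purely hypothetical example of that symlink workaround (libfoo is a placeholder; ldd on your executable tells you what is actually missing):

sudo ln -s /lib/libfoo.so.1 /lib/aarch64-linux-gnu/libfoo.so.1   # link in the multiarch dir pointing at the /lib copy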

If you copy your executable to the Jetson, and something is missing, then run ldd on the program. Whatever is missing, look at its location on the host PC with “ldconfig -p” (but the cross linker’s version, not just the system ldconfig). That would identify the need to either (A) put the library in that location on the Jetson, or (B) adjust the linking command to not require that location. The latter is more useful, since the former would have you forever updating manually every time that library has a package update.

When you mount a “sysroot” image, you have the advantage that the linker is merely performing a chroot to the image, and then using default paths. Paths do not have locations embedded. You have the double issue now that part of the libraries will be in ordinary paths on the sysroot image (the clone with libraries), and part will be in the host PC’s path (which are not found during a chroot to the sysroot image).

My suggestion is that you add those libraries on the Jetson itself. Then either clone again, or simply use rsync to update the loopback mounted clone over ssh. Then it won’t matter if the libraries are being looked for in a default location versus an exact location.

Thanks!

I have created my first arm64 test executable using the NVIDIA Jetson Linux toolchain. I had to copy some of the shared libraries from the Ubuntu multiarch lib folder (/usr/lib/aarch64-linux-gnu/) into the toolchain folder (/home/oliver/aarch64--glibc--stable-2022.08-1/aarch64-buildroot-linux-gnu/sysroot/usr/lib/)

I tried adding a target_library_path to my CMakeLists.txt to point to the multiarch folder instead of copying stuff around but it didn’t work, it seems to want both the sysroot (inside the toolchain folder) and the multiarch lib folder to be the same thing.

Maybe there is another way, I’m not too sure. I’m not using CMake’s find_library, maybe if I did that I’d have more CMake options available to me.

I don’t know if what I have done will run on a device. It links, at least!

So I flashed my device with Ubuntu 22.04/Jetpack 6.0 and managed to get everything building and my test app running, so that’s a good first step.

Now I’m trying to set up cross building.

I created a sysroot folder on the host, also running Ubuntu 22.04 and have used rsync to sync the lib, usr/include and usr/lib folders from the Jetson to the sysroot folder.
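
In case it helps anyone else, the sync itself is just a few rsync invocations over ssh; the hostname, user and local paths below are illustrative:

rsync -avz --delete oliver@jetson:/lib/         ~/Jetson/sysroot/lib/
rsync -avz --delete oliver@jetson:/usr/include/ ~/Jetson/sysroot/usr/include/
rsync -avz --delete oliver@jetson:/usr/lib/     ~/Jetson/sysroot/usr/lib/
# note: absolute symlinks copied from the device point outside the sysroot;
# rsync's --copy-unsafe-links can help if the cross linker complains about them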

I have downloaded and unpacked the recommended toolchain (gcc-11.3) for Jetson 6 and placed in another folder.

I then specify the compiler and sysroot in my CMakePresets.json file so they are passed as a command line argument to CMake.

It fails trying to find crt1.o and other such files when doing a test compile as part of project configuration. I tried a symlink but then others appear too, endlessly.

The problem is that these files and other includes and libraries are in the multiarch subdirectory of the rsync-d sysroot, e.g. crt1.o is inside sysroot/usr/lib/aarch64-linux-gnu and for whatever reason the toolchain is not searching these sub folders.

If I manually copy (well, merge) these subfolders up a level, i.e. the contents of aarch64-linux-gnu up into the lib above (and similarly for the includes) then it all works and builds just fine.

I’m close to having a nice workflow; I can install any new libraries directly on the Jetson device and then rsync them as part of the build script/CI. I’m just not too sure how to persuade the toolchain to search these folders too?

Perhaps this is a CMake issue? The instructions for cross compiling relate to building the samples and talk about exporting a CROSS_COMPILE variable and running make. But I’m using CMake/ninja.

Ok. It turned out I needed to do two things:

  1. Specify some additional CMAKE_C_FLAGS and CMAKE_CXX_FLAGS to add the include and library directories, e.g.

-I/home/oliver/Jetson/sysroot/usr/include/aarch64-linux-gnu
-L/home/oliver/Jetson/sysroot/lib/aarch64-linux-gnu
-Wl,-rpath-link=/home/oliver/Jetson/sysroot/lib/aarch64-linux-gnu

  2. The above alone does not allow ld to find the crt*.o files. They have to be manually copied or symlinked to the lib folder of the sysroot.

Then, everything works, and the cross built executable ran successfully on the Jetson.

In summary, my cross-build workflow:

  • Host PC running Ubuntu 22.04 used to flash Jetson device.
  • Jetson device flashed with latest Jetpack 6.0 DP. Any additional libraries needed installed on the Jetson and verified that everything built locally on the Jetson.
  • Bootlin gcc 11.3 toolchain unpacked into home folder on host PC (call this folder [toolchain])
  • Empty sysroot folder created in home folder on host PC (call this folder [sysroot])
  • rsync used to synchronise [sysroot]/lib [sysroot]/usr/include [sysroot]/usr/lib with corresponding /lib /usr/include and /usr/lib folders on the Jetson device
  • copied [sysroot]/lib/aarch64-linux-gnu/*.o to [sysroot]/lib/ (soft symlinking did not work)
  • CMAKE_C_COMPILER set to [toolchain]/bin/aarch64-buildroot-linux-gnu-cc
  • CMAKE_CXX_COMPILER set to [toolchain]/bin/aarch64-buildroot-linux-gnu-c++
  • CMAKE_C_FLAGS set to "-I[sysroot]/usr/include/aarch64-linux-gnu -L[sysroot]/lib/aarch64-linux-gnu -Wl,-rpath-link=[sysroot]/lib/aarch64-linux-gnu"
  • CMAKE_CXX_FLAGS set to the same as above
  • CMAKE_SYSROOT set to [sysroot]
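
For anyone wanting a single command reproducing the settings above, the configure step is roughly equivalent to the sketch below (paths are placeholders, and the CMAKE_SYSTEM_NAME/CMAKE_SYSTEM_PROCESSOR hints are the usual CMake cross-compile additions rather than anything Jetson-specific):

TOOLCHAIN=$HOME/toolchain     # the unpacked Bootlin gcc 11.3 toolchain
SYSROOT=$HOME/Jetson/sysroot  # the folder rsync'd from the Jetson
cp "$SYSROOT"/lib/aarch64-linux-gnu/*.o "$SYSROOT"/lib/   # the crt*.o workaround from the list above
cmake -S . -B build -G Ninja \
  -DCMAKE_SYSTEM_NAME=Linux \
  -DCMAKE_SYSTEM_PROCESSOR=aarch64 \
  -DCMAKE_C_COMPILER="$TOOLCHAIN/bin/aarch64-buildroot-linux-gnu-cc" \
  -DCMAKE_CXX_COMPILER="$TOOLCHAIN/bin/aarch64-buildroot-linux-gnu-c++" \
  -DCMAKE_SYSROOT="$SYSROOT" \
  -DCMAKE_C_FLAGS="-I$SYSROOT/usr/include/aarch64-linux-gnu -L$SYSROOT/lib/aarch64-linux-gnu -Wl,-rpath-link=$SYSROOT/lib/aarch64-linux-gnu" \
  -DCMAKE_CXX_FLAGS="-I$SYSROOT/usr/include/aarch64-linux-gnu -L$SYSROOT/lib/aarch64-linux-gnu -Wl,-rpath-link=$SYSROOT/lib/aarch64-linux-gnu"
cmake --build build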

I guess I’m kind of talking to myself now at this stage but hopefully this all might be useful for someone!
