Jetson-multimedia-api sample cross compile instructions

These instructions have me baffled:
https://docs.nvidia.com/jetson/l4t-multimedia/cross_platform_support.html
I have followed the spirit of the instructions (because I couldn’t get flash.sh to work as suggested) by instead using:
sudo dd if=/dev/sda1 of=./system.img
on my Jetson TX2,
where ‘sda1’ is the partition onto which ‘jetson_multimedia_api’ has been successfully installed and several of the projects successfully ‘made’ and executed. The dd command is issued in a directory on an empty partition large enough to accommodate the 64GB image file.
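For concreteness, the full clone-and-mount sequence I use looks roughly like this (device names, paths, and the host name are examples from my setup; adjust as needed):

# On the TX2: image the unmounted partition to a file on a scratch partition
sudo dd if=/dev/sda1 of=./system.img bs=4M status=progress

# Copy the image to the x86 host (host name is hypothetical)
scp ./system.img boyd@apricot:/media/boyd/system.img

# On the host: loop-mount the image so it can serve as the target rootfs
sudo mount -o loop /media/boyd/system.img /media/boyd/data
export TARGET_ROOTFS=/media/boyd/data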
After transferring the image file to the computer from which I wish to cross compile to the TX2 and completing the instructions, I get a result that isn’t even a tiny bit surprising - I was just kind of hoping something miraculous would happen, I guess.
Below is documented what happens when I complete the instructions at the above link. It is complete and self-explanatory.
I have two questions:

  1. What is the workflow for what is being called “cross compiling” in this (Jetson) context? I find no recent description.
  2. What is missing from the instructions? I can’t imagine how this type of ‘cross-compiling’ is supposed to work and have been stumped for a long time trying one stupid idea after another.
    Thanks,
    Rusty
    +++++++++++++++++++++++++++++++++++++++++++++++++++++++
    boyd@apricot:~$ echo $PATH
    /media/boyd/data/usr/local/cuda-10.2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
    boyd@apricot:~$ echo $TARGET_ROOTFS
    /media/boyd/data
    boyd@apricot:~$ echo $LD_LIBRARY_PATH
    /media/boyd/data/usr/local/cuda-10.2/lib64
    boyd@apricot:~$ echo $CROSS_COMPILE
    aarch64-linux-gnu-
    boyd@apricot:~$ cd /media/boyd/data/home/rusty/jetson_multimedia_api/samples/13_multi_camera
    boyd@apricot:/media/boyd/data/home/rusty/jetson_multimedia_api/samples/13_multi_camera$ sudo make
    [sudo] password for boyd:
    Compiling: main.cpp
    make[1]: Entering directory ‘/media/boyd/data/home/rusty/jetson_multimedia_api/samples/common/classes’
    Compiling: NvElementProfiler.cpp
    Compiling: NvElement.cpp
    Compiling: NvApplicationProfiler.cpp
    Compiling: NvVideoDecoder.cpp
    Compiling: NvDrmRenderer.cpp
    Compiling: NvJpegEncoder.cpp
    Compiling: NvVideoConverter.cpp
    Compiling: NvBuffer.cpp
    Compiling: NvLogging.cpp
    Compiling: NvEglRenderer.cpp
    Compiling: NvUtils.cpp
    Compiling: NvJpegDecoder.cpp
    Compiling: NvVideoEncoder.cpp
    Compiling: NvV4l2ElementPlane.cpp
    Compiling: NvV4l2Element.cpp
    make[1]: Leaving directory ‘/media/boyd/data/home/rusty/jetson_multimedia_api/samples/common/classes’
    Compiling: /media/boyd/data/home/rusty/jetson_multimedia_api/argus/samples/utils/Thread.cpp
    Linking: multi_camera
    /usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: cannot find -lv4l2
    /usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: cannot find -lEGL
    /usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: cannot find -lGLESv2
    /usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: cannot find -lX11
    /usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: cannot find -lnvbuf_utils
    /usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: skipping incompatible //usr/local/cuda/lib64/libnvjpeg.so when searching for -lnvjpeg
    /usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: cannot find -lnvjpeg
    /usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: cannot find -lnvosd
    /usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: cannot find -ldrm
    /usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: cannot find -lcuda
    /usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: skipping incompatible //usr/local/cuda/lib64/libcudart.so when searching for -lcudart
    /usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: cannot find -lcudart
    /usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: cannot find -lnveglstream_camconsumer
    /usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: cannot find -lnvargus_socketclient
    collect2: error: ld returned 1 exit status
    Makefile:60: recipe for target ‘multi_camera’ failed
    make: *** [multi_camera] Error 1
    boyd@apricot:/media/boyd/data/home/rusty/jetson_multimedia_api/samples/13_multi_camera$
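For reference, the environment echoed at the top of that session was set up roughly like this (reconstructed from the values shown above, not necessarily the exact commands on the NVIDIA page):

export TARGET_ROOTFS=/media/boyd/data
export CROSS_COMPILE=aarch64-linux-gnu-
export PATH=$TARGET_ROOTFS/usr/local/cuda-10.2/bin:$PATH
export LD_LIBRARY_PATH=$TARGET_ROOTFS/usr/local/cuda-10.2/lib64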

Hi,
Please download system image through SDKManager and you can see flash.sh in

~/nvidia/nvidia_sdk/JetPack_4.6_Linux_JETSON_TX2_TARGETS/Linux_for_Tegra

Please execute the command in the directory:

$ sudo ./flash.sh -r -k APP -G <clone> <board> mmcblk0p1

The little 32GB SD card in the TX2 won’t hold a full development system, will it (the image I use is 64GB and is 85% filled)? Also, how would this fix the missing library registrations? Of course, they aren’t missing, they just aren’t registered under the host OS. This is true of ANY drive image that is mounted, isn’t it? Or is there a Linux trick I don’t know?
Thanks,
R

Hi,
The steps are for building the samples on an x86 host PC. We are not sure, but it seems like you executed the steps on the TX2? That is not correct. For building the samples on Jetson platforms, you can go to the directory and build the samples directly.

/usr/src/jetson_multimedia_api

You don’t seem to understand my questions.
Thanks anyway.

Hi,
We probably misunderstand the questions. From the log it looks like TARGET_ROOTFS is not correctly set. Please check if you can see the prebuilt libs by executing the command:

$ ls $TARGET_ROOTFS/usr/lib/aarch64-linux-gnu/tegra

You should see prebuilt libs like libnvbuf_utils.so and libnvjpeg.so. If you don’t see the libs, please try to re-clone the rootfs from the TX2 and re-mount it on the host PC.

Please also make sure you download the correct toolchain for r32 releases:

$ ls gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu/
aarch64-linux-gnu  gcc-linaro-7.3.1-2018.05-linux-manifest.txt  lib      share
bin                include                                      libexec

Running that command in the environment given at the link in the original post yields:
ls $TARGET_ROOTFS/usr/lib/aarch64-linux-gnu/tegra
ld.so.conf libnvdla_compiler.so libnvidia-glvkspirv.so.32.5.1 libnvphs.so
libcuda.so libnvdla_runtime.so libnvidia-glvkspirv.so.32.6.1 libnvpva.so
libcuda.so.1 libnvdsbufferpool.so libnvidia-ptxjitcompiler.so.1 libnvrm_gpu.so
libcuda.so.1.1 libnvdsbufferpool.so.1.0.0 libnvidia-ptxjitcompiler.so.440.18 libnvrm_graphics.so
libdrm.so.2 libnveglstream_camconsumer.so libnvidia-rmapi-tegra.so.32.5.1 libnvrm.so
libgbm.so.1 libnveglstreamproducer.so libnvidia-rmapi-tegra.so.32.6.1 libnvscf.so
libGLX_nvidia.so.0 libnveventlib.so libnvidia-tls.so.32.5.1 libnvtestresults.so
libnvapputil.so libnvexif.so libnvidia-tls.so.32.6.1 libnvtnr.so
libnvargus.so libnvfnet.so libnvid_mapper.so libnvtracebuf.so
libnvargus_socketclient.so libnvfnetstoredefog.so libnvid_mapper.so.1.0.0 libnvtvmr.so
libnvargus_socketserver.so libnvfnetstorehdfx.so libnvimp.so libnvv4l2.so
libnvavp.so libnvgbm.so libnvisp_utils.so libnvv4lconvert.so
libnvbuf_fdmap.so.1.0.0 libnvgov_boot.so libnvjpeg.so libnvvulkan-producer.so
libnvbufsurface.so libnvgov_camera.so libnvll.so libnvwinsys.so
libnvbufsurface.so.1.0.0 libnvgov_force.so libnvmedia.so libsensors.hal-client.nvs.so
libnvbufsurftransform.so libnvgov_generic.so libnvmm_contentpipe.so libsensors_hal.nvs.so
libnvbufsurftransform.so.1.0.0 libnvgov_gpucompute.so libnvmmlite_image.so libsensors.l4t.no_fusion.nvs.so
libnvbuf_utils.so libnvgov_graphics.so libnvmmlite.so libtegrav4l2.so
libnvbuf_utils.so.1.0.0 libnvgov_il.so libnvmmlite_utils.so libv4l2_nvargus.so
libnvcameratools.so libnvgov_spincircle.so libnvmmlite_video.so libv4l2_nvcuvidvideocodec.so
libnvcamerautils.so libnvgov_tbc.so libnvmm_parser.so libv4l2_nvvidconv.so
libnvcam_imageencoder.so libnvgov_ui.so libnvmm.so libv4l2_nvvideocodec.so
libnvcamlog.so libnvidia-eglcore.so.32.5.1 libnvmm_utils.so libv4l2.so.0
libnvcamv4l2.so libnvidia-eglcore.so.32.6.1 libnvodm_imager.so libv4lconvert.so.0
libnvcapture.so libnvidia-egl-wayland.so libnvofsdk.so libvulkan.so.1
libnvcolorutil.so libnvidia-egl-wayland.so.1 libnvomxilclient.so libvulkan.so.1.2.141
libnvcuvidv4l2.so libnvidia-fatbinaryloader.so.440.18 libnvomx.so nvidia_icd.json
libnvdc.so libnvidia-glcore.so.32.5.1 libnvosd.so weston
libnvddk_2d_v2.so libnvidia-glcore.so.32.6.1 libnvos.so
libnvddk_vic.so libnvidia-glsi.so.32.5.1 libnvparser.so
libnvdecode2eglimage.so libnvidia-glsi.so.32.6.1 libnvphsd.so
That command doesn’t work with the host’s native environment, which is to be expected, correct?
ldconfig.real only caches system directories for the running system, right? How is the target cache supposed to get built, or what is supposed to work in its place at link time? I’ve tried generating it manually, but ldconfig seems to only work on the running OS. And if I try booting the host in the target environment, it, not at all surprisingly, causes other problems.
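My current understanding (please correct me) is that the cross linker never consults any ldconfig cache at all: it resolves each -l flag purely from the -L directories (or a --sysroot) given on the link line, so those directories have to point into the mounted clone. A hypothetical link line along these lines illustrates the idea (the paths and the exact library list are examples, not what the sample Makefiles actually pass):

aarch64-linux-gnu-g++ main.o Thread.o -o multi_camera \
  -L"$TARGET_ROOTFS/usr/lib/aarch64-linux-gnu" \
  -L"$TARGET_ROOTFS/usr/lib/aarch64-linux-gnu/tegra" \
  -L"$TARGET_ROOTFS/usr/local/cuda-10.2/lib64" \
  -lnvbuf_utils -lnvjpeg -lcudart -lv4l2 -lEGL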

As to your second question, are you suggesting that SDKManager can’t be trusted to install the selected software versions? Because that’s all I’ve found that works correctly.

Let me ask you a question: are you able to successfully execute the instructions given at the link above? If so, what do you get when you run ‘ldconfig -p’ on the host? Are gstreamer, EGL, Argus, OpenCV, v4l2, etc., all present in the output?

Thanks

Hi,
We are able to follow the steps and set up the environment on the host PC. Please try to clone the image again and check if you are able to generate the files.

$ sudo ./flash.sh -r -k APP -G r3261.img jetson-tx2 mmcblk0p1
user@user:~/nvidia/nvidia_sdk/JetPack_4.6_Linux_JETSON_TX2_TARGETS/Linux_for_Tegra$ ll r3261.img*
-rwxr-xr-x 1 root root 14788465108 十一 18 13:47 r3261.img*
-rw-r--r-- 1 root root 30064771072 十一 18 14:05 r3261.img.raw

And then follow the steps to mount r3261.img.raw.
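Something like this should work for the mount (the mount point is just an example):

$ sudo mkdir -p /media/user/clone
$ sudo mount -o loop r3261.img.raw /media/user/clone
$ export TARGET_ROOTFS=/media/user/clone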

No, the instructions fail to build, ending with:
“/bin/sh: 1: aarch64-unknown-linux-gnu-g++: not found”
HOWEVER, if one executes the suggested changes here and then uses the following command string to build the (cross) target (because exporting the build variables in the terminal session from which ‘make’ is launched does not work):

sudo make TARGET_ARCH=aarch64 BUILD_TYPE=debug TARGET_ROOTFS=$HOME/jetson

main.cpp will compile, but once again the linker fails, which is the result I got using ‘dd’ on the target and modifying files and instructions as I described. It’s not the exact same result, though, so maybe a little progress can be claimed? To wit:

 "boyd@apricot:~/jetson/home/jetson/jetson_multimedia_api/samples/13_multi_camera$ sudo make TARGET_ARCH=aarch64 BUILD_TYPE=debug TARGET_ROOTFS=$HOME/jetson
Compiling: main.cpp
Compiling: /home/boyd/jetson/home/jetson/jetson_multimedia_api/argus/samples/utils/Thread.cpp
Linking: multi_camera
/usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: skipping incompatible /home/boyd/jetson//usr/local/cuda/lib64/libnvjpeg.so when searching for -lnvjpeg
/usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: skipping incompatible /home/boyd/jetson//usr/local/cuda/lib64/libcudart.so when searching for -lcudart
/usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: cannot find -lcudart
collect2: error: ld returned 1 exit status
Makefile:60: recipe for target 'multi_camera' failed
make: *** [multi_camera] Error 1


As a very important aside, your suggested use of flash.sh works for BOTH the internal SD card AND the external SSD (NVMe) partition! The trick is the image file name, ‘r3261’. Also, one must change the filesystem name of the partition to be copied to ‘APP’ (which implies one must change the name of the internal SD card first). The 64 GB SSD partition is built and transferred about 4X faster than the 28 GB internal SD card.
Both partitions give exactly the same results I’ve described and respond in exactly the same way to the modifications I’ve tried. Again, a little progress.
Any idea what else could be different between your environment and mine?

Thanks,

Hi,
We suggest re-flashing the TX2 with SDKManager, flashing both the system image and the SDK components so that you have a complete rootfs. Then run flash.sh to generate the img.raw.

That didn’t do it, either (I tried it 2 days ago). However, I have successfully cross-compiled the multi-camera sample, but I had to make more changes that, frankly, I don’t understand (some problems were fixed using troubleshooting techniques rather than from knowledge). I’ll post more when I’ve run a few more tests and have time to arrange my notes into something others can use.
In the meantime, have a look back at a JetPack 3.0 discussion (JetPack 3.0 tegra-multimedia-api sample cross compile issue - #3 by NobodyHere) for an old fix, dating back to 2017, for a problem that still lurks in Rules.mk for the multimedia samples.

Here is what I have verified, along with references, when available. In a nutshell, I can cross-compile multimedia samples from a clone of any drive/partition on my TX2, made with

sudo ./flash.sh -r -k APP -G <clone filename>.img jetson-tx2 <target partition name>

The filesystem name of the target partition being cloned must be ‘APP’, which means one must rename partitions as needed for this method (only the ‘.raw’ image is used on the host). Alternatively, one can create a partition image with

sudo dd if=<input partition> of=<output filename>

executed on the target, as long as that partition is not mounted, meaning you must have more than one Ubuntu partition and know how to boot the TX2 into them. One must also transfer the image file to the host, which happens automatically with the ‘flash.sh’ method.
Keep in mind that either method requires enough free disk space to “absorb” the full size of the partition being cloned, even if only a few megabytes are actually used by files. You don’t want to run out of disk space!
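Before cloning, I do a quick sanity check along these lines (paths and device names are examples):

df -h /media/scratch                      # free space where the image will land
sudo blockdev --getsize64 /dev/mmcblk0p1  # size of the partition to be cloned, in bytes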
The next step is to mount the cloned image. Again, there are (at least) two ways to get that done; I have been able to detect no difference in performance between them.

sudo mount -t ext4 <clone filename> <mount point>

or

sudo mount -o loop <clone filename> <mount point>

This <mount point> is TARGET_ROOTFS. ‘Exporting’ it is not working on my host; it’s a mystery I haven’t had time to pursue. Because of this, after navigating to the source directory of a sample, I type:

sudo make TARGET_ARCH=aarch64 BUILD_TYPE=debug/release TARGET_ROOTFS=<image mount point>

Pick one value for BUILD_TYPE.
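For example, with the clone mounted at $HOME/jetson, a release build of the multi-camera sample looks like this on my setup (paths are mine):

cd ~/jetson/home/jetson/jetson_multimedia_api/samples/13_multi_camera
sudo make TARGET_ARCH=aarch64 BUILD_TYPE=release TARGET_ROOTFS=$HOME/jetson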
Additionally, I have to make a couple of changes in the Rules.mk file in the ‘jetson_multimedia_api/samples’ directory. First, make the change documented here.

Then find

CUDA_PATH :=/usr/local/cuda

and change it to:

CUDA_PATH :=/usr/local/cuda-10.2

I am really baffled by this last one. Of course, ‘Rules.mk’ works just fine, as delivered, on the TX2.
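If it helps, that edit can be scripted; here is a hypothetical one-liner (adjust the CUDA version and the path to Rules.mk for your setup):

sudo sed -i 's|^CUDA_PATH.*|CUDA_PATH :=/usr/local/cuda-10.2|' jetson_multimedia_api/samples/Rules.mk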
One little tip - keep your ‘jetson_multimedia_api’ directories and any other sources on a local hard drive. This way, you won’t accidentally delete your work when unmounting the image file. Remember, nothing gets saved to the image file.
Thanks to DaneLLL for suggestions that led to a solution for my system.


None of what follows in this thread will get cross-compiling, as required by the Jetson documentation, completely working on my system.
The ONLY thing that needs to be done is to follow the instructions here:
https://elinux.org/Jetson/Filesystem_Emulation/Emulating_Jetson_Filesystem_with_QEMU_and_CHROOT
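For anyone who wants the gist without leaving the thread, the approach boils down to something like this (a sketch from memory; the elinux page is authoritative, and package names may differ on your host distro):

# On the x86 host: install user-mode QEMU and binfmt support
sudo apt-get install qemu-user-static binfmt-support

# Copy the static aarch64 emulator into the mounted clone
sudo cp /usr/bin/qemu-aarch64-static $TARGET_ROOTFS/usr/bin/

# chroot into the clone; aarch64 binaries now run through QEMU,
# so the samples build "natively" inside the clone
sudo chroot $TARGET_ROOTFS /bin/bash
cd /usr/src/jetson_multimedia_api/samples/13_multi_camera
make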

ALL the problems described below (and others not shared) vanish, but this method has its limits. Nvidia doesn’t provide an Nsight Eclipse Edition that will install on the TX2, so there is no IDE that will run in the emulated system. Similarly, VSCode will not work in the emulated volume because it makes system calls that QEMU can’t handle.
I will post a new thread in the next few days when I verify a useful workflow, beyond using a text editor with no Nsight support.
The biggest challenge is that all these development tools need to run in the emulated environment, or it’s back to the madness I have partially documented below.
Again, despite claims to the contrary at the end of this thread, a complete build system that would allow Argus samples to compile and link was never achieved.
I am still baffled by the claims that any of the Nvidia multimedia cross-compiling instructions work, as given, on bare metal NOT maintained by Nvidia’s IT department. In MY SIX-MONTH search for a solution, I have verified that the target environment is ABSOLUTELY REQUIRED on the host for cross-compiling. Duh. There is nothing in the Nvidia instructions, as I understand them, that meets that requirement.
More, soon,
Rusty

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.