Jetson TX1 with L4T 23.1 - Doesn't support native aarch64 binaries?

Just got my JTX1 dev kit today. Reflashed it with L4T 23.1, and it’s up and running. However, the default GCC generates armv7 code. It doesn’t even accept the “-mcpu=cortex-a57” option.
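For reference, you can confirm what the on-board gcc targets with something like this; the expected results in the comments are what an armhf userspace would print, not captured output:

gcc -dumpmachine                    # arm-linux-gnueabihf on the stock filesystem
echo 'int main(void){return 0;}' > t.c
gcc -mcpu=cortex-a57 -c t.c         # rejected, since the armhf gcc-4.8 doesn't know this CPU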

I used the Linaro aarch64 compiler to cross-compile a simple test program and uploaded it to the device. However, it doesn’t even load.

uname -a does confirm that the kernel itself is aarch64:

$ uname -a
Linux tegra-ubuntu 3.10.67-gcdddc52 #1 SMP PREEMPT Mon Nov 9 13:16:26 PST 2015 aarch64 aarch64 aarch64 GNU/Linux

Can anybody shed some light on this?

I don’t have a TX1 to experiment with yet, but unpacking the L4T sample filesystem reveals gcc-4.8, similar to the TK1. As far as Linaro cross-compile toolchains go, they’ve been putting a lot of work into AArch64 for the 5.x series… I’m wondering: if you enable the required repositories and do an apt search for gcc, is there a 5.x listed?
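Something along these lines on the device should show what the repositories actually offer; I haven’t run this on a JTX1, so treat the package names (gcc-5 in particular) as guesses:

sudo apt-get update
apt-cache search gcc | grep '^gcc-'
apt-cache policy gcc-4.8 gcc-5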

It’s not really about the toolchains. L4T 23.1 doesn’t even include the aarch64 binary loader.

Check the /lib folder: there is only ld-linux-armhf.so.3, which points to /lib/arm-linux-gnueabihf/ld-2.19.so.
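A quick way to see why a cross-compiled binary refuses to load is to compare the interpreter it was linked against with what actually exists in /lib. These are generic file/binutils commands, and the loader path shown is the usual glibc default for aarch64:

file ./test
readelf -l ./test | grep 'program interpreter'   # expects /lib/ld-linux-aarch64.so.1
ls -l /lib/ld-linux-aarch64.so.1                 # not present on L4T 23.1, hence the failure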

So it looks like only the kernel is aarch64. All userland applications are still 32-bit, which was a surprise to me.

There are probably packages available, as CUDA 7 (which is used on the TX1) is pure 64-bit. The question is what needs to be added, and from where, to get the 64-bit support.

Today the sample filesystem included with JTX1 is Ubuntu 14.04 armhf (32-bit). There is currently reduced distro support and fewer userspace packages available for aarch64, so an armhf userspace was provided at launch. As linuxdev mentions, there may be packages available in the repo to provide compatibility as Ubuntu brings them online. The kernel is still aarch64.

With TX1 we hope to see additional filesystems and distros emerge, similar to TK1: http://elinux.org/Jetson_TK1#Linux_distributions_running_on_Tegra

Will static aarch64 binaries run on the kernel in L4T 23.1?
Any pointers to packages providing the dynamic loader, glibc, and the basic libraries?
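As a quick experiment, a statically linked hello-world built with the Linaro cross toolchain should answer the first question, since it needs neither the aarch64 loader nor an aarch64 glibc on the target. The toolchain prefix and the default ubuntu@tegra-ubuntu login below are assumptions; adjust them to your setup:

# on the host
cat > hello.c << 'EOF'
#include <stdio.h>
int main(void) { printf("hello from aarch64\n"); return 0; }
EOF
aarch64-linux-gnu-gcc -static -o hello hello.c
scp hello ubuntu@tegra-ubuntu:
# then on the JTX1: ./hello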

@dusty_nv Does NVIDIA not think this information is relevant to people who would potentially purchase the TX1? Maybe it’s out there and I didn’t see it, but it would’ve been nice to have this made more obvious up front.

I think if you dpkg --add-architecture arm64, they become available (it will probably nuke your install, though):

sudo dpkg --add-architecture arm64
sudo apt-get update
sudo apt-get install libc6:arm64 binutils:arm64 cpp-4.8:arm64 gcc-4.8:arm64

It looks like it wants to remove the gnueabihf support when I try this; I’m still trying to figure out the multiarch commands needed for a side-by-side install. Maybe linuxdev will be able to shed some light after he tinkers with his JTX1.
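For what it’s worth, the multiarch state can be inspected, and apt’s simulate flag will report what would be removed before anything is actually touched; this is generic dpkg/apt usage rather than anything JTX1-specific:

dpkg --print-architecture                        # native architecture (armhf here)
dpkg --print-foreign-architectures               # lists arm64 after --add-architecture
apt-get install -s libc6:arm64 gcc-4.8:arm64     # -s simulates and shows any removals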

Although I attempted to circulate the info through various outlets at launch, I do understand where you are coming from (initially I didn’t even realize it was possible to have a 64-bit kernel and a 32-bit filesystem at the same time!). The Buy pages are usually focused on the hardware, with software support being continuously under development (as it continues to be with JTK1). An aarch64 root filesystem for JTX1 is on our roadmap, although we can’t commit to the timing yet, as vendor support for arm64 distros is continuously evolving. Apologies for the difficulty.

Thanks for the hint. Will try tomorrow. I don’t mind messing up the installation.

Confirmed. Just followed the instructions from dusty_nv, and my aarch64 binaries cross-compiled from Ubuntu work like a charm.

Even though cuda-7.0 was removed from the filesystem, the previously compiled VisionWorks and GameWorks samples still run properly.

Rebooted the board and Ubuntu started up without an issue.

Didn’t try other things.

Thanks Dusty!

If I only install libc6:arm64 libstdc++6:arm64 libgomp1:arm64, it doesn’t remove cuda, and I can still run my cross-compiled apps. But I haven’t been successful in running CUDA apps. The problem is that I cannot find libcuda for arm64. Is that available somewhere?
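For anyone following along, the exact commands were just the three runtime libraries on top of the added architecture; whether cuda survives may depend on which other packages you already have installed:

sudo dpkg --add-architecture arm64
sudo apt-get update
sudo apt-get install libc6:arm64 libstdc++6:arm64 libgomp1:arm64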

I am unable to find the userspace portion of the GL and CUDA drivers to use when booting an arm64 userland (based on Ubuntu 15.10 arm64).

The R23.2.2 package seems to be available only for armhf. Having a full 64-bit userland is kind of pointless without these drivers, in my opinion.

Similar problem here: I am trying to run a 32-bit Java JVM on the TX1. The JVM throws the following error: “Error occurred during initialization of VM: Server VM is only supported on ARMv7+ VFP”, obviously because the kernel is aarch64.

Running the ARMv8 JVM won’t work either, as it is not a recognized binary…

Has anyone had luck adding 64-bit support to the 32-bit userland?

Got Java working now: 8.0.60 will not work, while 8.0.72 does.

Guys

I wish I had found this thread before wasting 4 hours wondering why I could not compile a kernel on my Jetson board ;-).

I think NVIDIA should make it MUCH clearer how GCC is set up on the Jetson. It was not at all clear that the userspace is 32-bit while only the kernel is 64-bit. It took me some digging to determine the issue.

To remedy this, I guess there are (at least) three options:

  1. Cross-compile the kernel for aarch64 on a host machine and then install that kernel on the Jetson.

  2. Update (or add) an aarch64 GCC on the Jetson board itself. I am not sure whether this trashes the install on the board, but there are some other threads that discuss this.

  3. Install an arm->aarch64 cross-compiler on the Jetson.

Cheers

Stephen

PS Any feedback on which of options 1-3 above is best would be appreciated!

I’m trying to compile 3rd party software that expects to find /usr/local/cuda/lib64, which is not available with the default install of CUDA on the Tegra X1.

Would cross-compilation help me in this situation? If so, are there special steps I need to be aware of for deploying software linked against 64-bit CUDA libraries to the TX1? If this is not relatively straightforward, does anyone know of an update to the timetable for providing 64-bit CUDA support?

Thank you for any tips or insights.

That location tends to be for a non-Ubuntu desktop install, i.e., a desktop CUDA install via a “.run” script instead of a package manager. As an example, I use Fedora, and CUDA packaging lags behind modern Fedora… CUDA is packaged directly for up to Fedora 21, while Fedora 23 has been out for a while. The run script puts CUDA 7.0 in “/usr/local/cuda-7.0” and CUDA 7.5 in “/usr/local/cuda-7.5”; whichever version is being used, the script then adds a symbolic link “/usr/local/cuda” which points at either the 7.0 or the 7.5 entry.

If your 3rd party software expects that directory, I suspect the software is designed for desktop x86_64 and not ARM. That doesn’t guarantee you can’t get the software working, but it does mean installation and compile won’t be as simple as they should be (and failure is still possible for other reasons). One important complication which could stop the build is that, although CUDA on the JTX1 is currently 64-bit, the user space is 32-bit (this will be 64-bit in the next L4T release). If the software you are working with is ok with 64-bit libs in a 32-bit link environment, you can proceed.

The proper way to fix the compile (assuming 32-bit linking won’t hurt) would likely be to first see if the software has a configure-type script which can name an alternate location for the CUDA install. Barring this, perhaps a symbolic link “cuda” could be added in “/usr/local” which points at “/usr/lib/arm-linux-gnueabihf”, though the odds of that working are low.
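If you want to try the symlink experiment, it is a one-liner and easy to undo; check first whether the L4T CUDA libraries really live in the armhf multiarch directory, since that location is only a guess on my part:

ls /usr/lib/arm-linux-gnueabihf | grep -i cuda      # confirm the libraries are actually there
sudo ln -s /usr/lib/arm-linux-gnueabihf /usr/local/cuda
# undo with: sudo rm /usr/local/cuda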

If there is more information on the software you are compiling a more specific answer might be possible.

Any ideas when this “next L4T release” will be? It would be nice to have at least the xorg driver in 64-bit…

I found this thread after seeing errors when trying to make Duo3D’s kernel (https://duo3d.com/docs/articles/install-arm) in order to use my Duo3D MLX camera.

Specifically I was seeing the error:

“gcc: error: unrecognized command line option ‘-mgeneral-regs-only’”

I did some research and found this thread. I performed the steps that dusty_nv suggested (including installing libc6, binutils, cpp-4.8, and gcc-4.8 for arm64). However, when I then ran ‘make -j4’, I got the error:

“/bin/sh: 1: gcc: not found”

Does anyone know how I can circumvent this?

Thanks!

There is a purely 64-bit L4T release coming out soon, but for now you are better off cross-compiling on a standard Linux desktop host. See:
https://devtalk.nvidia.com/default/topic/901677/jetson-tx1/building-tx1-kernel-from-source/post/4749300/#4749300

The R23.x releases are 32-bit user space and 64-bit kernel space… thus kernel compiles require both a 64-bit toolchain and a 32-bit compiler. I use the most recent Linaro 5.2 cross toolchain for kernel builds. If those are not available via your package manager, see:
https://releases.linaro.org/components/toolchain/binaries/

Look for aarch64-linux-gnu (64-bit) and arm-linux-gnueabihf (32-bit).
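As a rough sketch, a kernel cross-compile for R23.x from a desktop host then looks something like the following. The toolchain paths are placeholders, tegra21_defconfig is the TX1 config name as I recall it, and CROSS32CC is (as far as I know) the variable the L4T kernel build uses for the 32-bit compiler, so verify all of these against the documentation for your release:

# on the desktop host, with both Linaro toolchains unpacked
export ARCH=arm64
export CROSS_COMPILE=/opt/linaro/aarch64-linux-gnu/bin/aarch64-linux-gnu-
export CROSS32CC=/opt/linaro/arm-linux-gnueabihf/bin/arm-linux-gnueabihf-gcc
make tegra21_defconfig
make -j4 Image dtbs modules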