Compile Error ‘-m64’ & ‘-mssse3’

Any ideas how we can SOLVE these issues? Both are common.
g++: error: unrecognized command-line option ‘-m64’
g++: error: unrecognized command-line option ‘-mssse3’

fix for aarch64 (ARM 64-bit arch) by moslemk · Pull Request #3544 · Theano/Theano

g++ -m64 -march=native -mtune=native -mssse3 -Wall -Wextra -Wno-deprecated-copy -Ofast -ftree-vectorize -flto -c oldbloom/bloom.cpp -o oldbloom.o
g++: error: unrecognized command-line option ‘-m64’
g++: error: unrecognized command-line option ‘-mssse3’
make: *** [Makefile:2: default] Error 1

I am curious, which g++ are you using?
g++ --version

Also, the URL you gave is of interest, but is related to some Python content. Is the code being compiled related to some package, or is it your code? It might be good to know the context. I am assuming it is running on a Jetson, so the L4T release might also be useful to know; see “head -n 1 /etc/nv_tegra_release”. Information on where to find documentation for the code being compiled (in the case of some package) would help; otherwise anything for reproducing this would help.

g++ (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
https://brew.sh/ Installed
Old Architectures not recognized…
Packages have been archived by Ubuntu.

Homebrew 4.5.8

Question: Does this Module slot into the Jetson AGX Dev kit?
Jetson Orin NX (ONX)16GB: 2x NVDLA

Yes, it does but you need to RUN the SDK Manager to set it up properly.
How to install jetson SDK components after flash jetson os image - Jetson & Embedded Systems / Jetson AGX Xavier - NVIDIA Developer Forums

I was unable to find the -m64 option in the install.sh script, so whatever is adding it is part of some other subset of the code (I did not want to actually install on my computer, so I was just reading the source code). I do have a suggestion for getting useful debug output, though. I will preface it by saying that something I read suggests the -m64 option might be occurring on a line which is also used for the PowerPC Mac architecture; if that is the case, then it should be easy to remove that option in one of the scripts.
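If you already have the sources unpacked somewhere, a quick way to find where that flag gets added (just a sketch; /path/to/source is a placeholder for wherever your checkout lives) is:

# Search the source tree for wherever -m64 / -mssse3 are set.
grep -rn -- '-m64' /path/to/source
grep -rn -- '-mssse3' /path/to/source

Whatever Makefile or script those hits point at is the place where the option would need to become conditional.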

Start by downloading the install.sh using wget (if you don’t have this, then “sudo apt-get install wget”). Use this to download to an empty directory somewhere:
wget https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh

You can execute this without sudo to start (it might require sudo, but let’s see what errors we see first without that) and create a log file which you can attach to this forum:
/bin/bash install.sh 2>&1 | tee log_install.txt

You’ll see that run, and all content will also go into that log file. Attach the log file to the forum. Now also look in the empty directory; are there any new files? Depending on what happens we might also run this with this, but hold off on that for now:
sudo /bin/bash install.sh 2>&1 | tee log_sudo_install.txt

I want to see the detailed compile error. If it gives the compile line with -m64, then we can track that down and see if it also has something on that line incompatible with 64-bit ARM.
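For reference, -m64 and -mssse3 are x86-only options, which is why an aarch64 g++ rejects them; the other flags on the compile line quoted earlier are fine on ARM. If it does turn out to come from the project’s Makefile (the error points at Makefile line 2), a minimal sketch of one way to guard the x86-only flags looks like this (the ARCH variable and the ifeq block are my own illustration, not the project’s actual Makefile):

# Only pass x86-specific options when building on x86_64.
ARCH := $(shell uname -m)
CXXFLAGS := -march=native -mtune=native -Wall -Wextra -Wno-deprecated-copy -Ofast -ftree-vectorize -flto
ifeq ($(ARCH),x86_64)
CXXFLAGS += -m64 -mssse3
endif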

Missing Packages..

Sudo apt-get install wget https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh

Reading package lists… Done
Building dependency tree… Done
Reading state information… Done
E: Unable to locate package https://raw.githubusercontent.com/Homebrew/install/HEAD
E: Couldn’t find any package by glob ‘https://raw.githubusercontent.com/Homebrew/install/HEAD’
E: Couldn’t find any package by regex ‘https://raw.githubusercontent.com/Homebrew/install/HEAD’

Yes, understood.
I want to see the detailed compile error. If it gives the compile line with -m64

Looks like we have to install the full SDK.
https://docs.prophesee.ai/stable/installation/linux.html
https://forums.developer.nvidia.com/t/make-1-no-rule-to-make-target-makefile-para/2979

Install full SDK

Using the SDK Manager requires a JUMPER on the DEV kit (to put it into recovery mode).
https://youtu.be/CmDaXah4l5g?si=kaDkPPE_YOpUlolr

One was not provided with the DEV kit, so I have to find one.

Jumper missing in Jetson Nano DEV KIT
Recovery mode FC and REC

Just making notes as I read…

The above is in error. The install.sh file is not a package and apt-get is not used with that file. All you want is a local copy of the install.sh, and this is retrieved via:
wget https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh
(there is neither sudo nor apt-get in the above)

Incidentally, my dev computer died (probably a power supply…there was a loud “pop” from it), so my ability to test is minimal (I’m using a very old and underpowered computer in the garage).

Some information on the jumper you might find useful:

  • This is like a shift key when trying to type a capital letter. You only need to hold the recovery pin to ground during either a power up or a power reset. There is no need to keep holding the short after that.
  • The recovery pin itself sits next to a ground pin. The jumper merely takes the recovery pin to ground when applied. Although I don’t recommend it, something as simple as a small screwdriver shorting the two pins while turning on power will work.
  • The header has 0.1" spacing (2.54 mm).
  • During actual flash there comes a time when the Jetson automatically reboots. This occurs when actual flash completes, and then optional software (e.g., CUDA) gets installed to a normally booted Jetson over ssh; thus, the recovery pin must not be grounded at that point.
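A quick way to confirm the jumper actually put the Jetson into recovery mode (a hedged check; the exact product string varies by module) is to look for NVIDIA’s recovery-mode USB device from the host PC:

lsusb | grep -i nvidia
# A line with "NVIDIA Corp." (vendor ID 0955) means recovery mode was entered.

If nothing shows up, SDK Manager will not see the Jetson either.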

I saw on one URL that your software is supported by Ubuntu 22.04 or 24.04. L4T R36.x corresponds to Ubuntu 22.04, and is flashed using JetPack/SDK Manager 6.x. You can reach the right flash software for Ubuntu 22.04 (L4T R36.x) here:
https://developer.nvidia.com/linux-tegra

JetPack 5.x installs L4T R35.x, which is Ubuntu 20.04, and would be insufficient according to the URL you gave. Normally each JetPack release is associated with a given L4T release, although one can see older releases if desired.

Would you have a READY ISO with the SDK Manager up and running?
(To double-check that it is not a hardware fault.)

  • I have bought two new microSD 128 GB.
  • I have two Jetson Nano Super Dev Kits. TESTING…1…2…3
  • Installed both versions available.
    JetPack 5.1.3 SD - Works well and is running with my C++ Code (No Jetpack)
    JetPack 6.2. SD - Works well and is running with my C++ Code (No Jetpack)
  • C++ Code is working well, no issues. (g++ Legacy)

Having a HARD time getting it to RUN.
Terminal: SDK Manager (With jumper; a local computer shop gave me one.)

Followed the instructions to the letter, took my time.

The SDK Manager, launched from the terminal, never pops up its window.

I will upgrade the hardware to a THOR with 128 GB, as I need to maximize the RAM; I just need to prove that it works first.

Install CUDA SDK.
Depending on the CUDA SDK version and on your Linux distribution, you may need to install an older g++ (just for the CUDA SDK). Edit the makefile and set the correct CUDA SDK path and an appropriate host compiler for nvcc.

CUDA       = /usr/local/cuda-8.0
CXXCUDA    = /usr/bin/g++-4.8

You can enter a list of architectures (refer to the nvcc documentation) if you have several GPUs with different architectures. Compute capability 2.0 (Fermi) is deprecated in recent CUDA SDKs. The source code needs to be compiled and linked with a recent gcc (>=7). The current release has been compiled with gcc 7.3.0.
Go to the source code directory. ccap is the desired compute capability.

$ g++ -v
gcc version 7.3.0 (Ubuntu 7.3.0-27ubuntu1~18.04)
$ make all (for build without CUDA support)
or
$ make gpu=1 ccap=20 all
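For context, the ccap value typically ends up as nvcc architecture flags. A minimal sketch, assuming the Makefile forwards it in the usual way (the CCAP/GENCODE variable names and the kernel.cu filename are my own placeholders, not taken from the project):

CCAP    = 20
GENCODE = -gencode arch=compute_$(CCAP),code=sm_$(CCAP)
# Example resulting compile line:
# $(CUDA)/bin/nvcc -ccbin $(CXXCUDA) $(GENCODE) -O2 -c kernel.cu -o kernel.o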

Otherwise, the CPU and memory are working very well. No issues.

Wow! I am impressed; it is faster than my Dell PowerEdge server, and it draws only 30 watts. Wow!

OpenCV - same issue as I have: no JetPack.

SDK Manager Setup.

I will prepare a ZIP or ISO as a quick fix to avoid spending too much time sorting it all out, unless someone comes up with it before me! (I prefer coding.)

Rufus - Create bootable USB drives the easy way ISO setup.

I’m somewhat confused as to what the situation is. I’m going to provide some information about how all of this fits together, which might point us in another direction (or not).

First, the Orin Nano comes in two broad categories: Dev kits, which have an SD card mounted on the module itself without any eMMC, and commercial module which is used on any carrier board such that the module has eMMC, but no SD card slot (the slot, if any, is on the carrier board for commercial versions). I’m assuming it is a dev kit, that seems to already be established.

The dev kit has QSPI memory on the module itself, and when you flash an SD card model of Jetson, it is the QSPI you are flashing. QSPI is updated in recovery mode using JetPack/SDK Manager which runs on a separate Ubuntu PC (depending on releases you would need either Ubuntu 20.04 or Ubuntu 22.04; Ubuntu 24.04 would fail). That QSPI is what runs all of the boot stages prior to the SD card loading (the SD card is the actual operating system; you can’t get to the SD card if the QSPI is not correct).

If you have an SD card, and you can mount it on a Linux machine (it doesn’t have to be the Jetson), then you can go to the mount point to see the SD card content. From there you can cd into <mount point>/etc/ and examine the output of “head -n 1 nv_tegra_release”. This tells you the L4T release.
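A minimal sketch of that check from a Linux host (the /dev/mmcblk0p1 device name and /mnt mount point are assumptions; use lsblk to find the actual device for your card reader):

# Identify the card's rootfs partition, mount it read-only, and read the release line.
lsblk -f
sudo mount -o ro /dev/mmcblk0p1 /mnt
head -n 1 /mnt/etc/nv_tegra_release
sudo umount /mnt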

L4T is what actually gets flashed, and is what you would call Ubuntu once NVIDIA content is applied. This includes what gets flashed to the QSPI of the module. If the major release of L4T is R35.x, then it is expected that you have already flashed the QSPI with R35.x. If the major release of L4T is R36.x, then it is expected that you have already flashed the QSPI with R36.x. Reaching the SD card depends on this QSPI. If you mismatch QSPI boot content major release with a different major release of the SD card, then boot will fail.

If you have an SD card prebuilt image, e.g., from one of the labs, then this should work if and only if its QSPI content is already flashed to a compatible major version.

If you boot to an NVMe or SSD, then there are more flash steps required, and those steps change what is in the QSPI (otherwise boot won’t reach the correct device).

If the major release in QSPI matches any prebuilt image, then a correct application of that image to the SD card should “just work” (assumes you are not using an external device like NVMe). It is easy to download a binary image and apply it to the SD card so long as the SD card is large enough.
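As an example of that last point, applying a downloaded image from a Linux host can be as simple as the following (the zip filename and /dev/sdX are placeholders; verify the device with lsblk first, because dd overwrites whatever it is pointed at):

lsblk
# If the download is a zip containing a single .img, stream it straight onto the card:
unzip -p jetson_image.zip | sudo dd of=/dev/sdX bs=4M status=progress conv=fsync
sync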


Now comes the part where you’re getting to the AI lab. If you have an image which you’ve applied to the SD card, and it boots, then you don’t need to flash further. If you are having trouble from a fully booted system because the lab needs some other software, then flashing is not going to help…you already have the QSPI for that release of SD card.

The software related to the original install.sh is not from NVIDIA. Bootable USB drives do not work with a Jetson, at least not ones which are “standard”. Jetsons do not have a hardware BIOS, and they cannot do all of the things with bootable media which a regular computer does (this is why a separate host PC is needed to flash the QSPI). Most Linux 64-bit ARM code can in fact run on a Jetson though, so software which the install.sh is designed to add “probably” can work if all of its dependencies are installed (there are a few situations where it wouldn’t work, e.g., if it requires the GPU to be a discrete GPU, a dGPU, on the PCIe bus…Jetson GPUs are integrated directly to the memory controller as an iGPU).

When it comes to CUDA, typically a prebuilt image designed for a lab already has the correct version. Sometimes software being built has to be told where to find CUDA content. Finding that software is different for compiling versus running. The addition of CUDA to the Jetson (assuming it isn’t just a case of needing to find CUDA, and that it is actually missing) has to be through JetPack/SDK Manager. In this latter case the Jetson is not in recovery mode, but is instead fully booted. If you did flash the Jetson, then as flash finishes the Jetson automatically reboots anyway (at which point you should not have the jumper in place for recovery mode…you want it to boot normally when it reboots). JetPack/SDKM then uses ssh to your admin account of the Jetson to install the selected content (e.g., CUDA). If you install any other version of CUDA it will not work as expected; the CUDA from JetPack understands the iGPU of a Jetson; other installation methods will be expecting a dGPU which will break the iGPU software.
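If the question is simply whether CUDA is already present on a booted Jetson (a hedged check; exact package names differ between JetPack releases), something like this usually answers it:

# Look for an installed CUDA toolkit and its version.
ls -d /usr/local/cuda* 2>/dev/null
/usr/local/cuda/bin/nvcc --version
dpkg -l | grep -i cuda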

A big question often exists as to how to apply the SD card image to the SD card itself. Using Rufus to create a bootable USB drive won’t work. Options exist:

  • Copy content into the original SD card (e.g., while booted and copying from either a device or over ssh).
  • Flash the QSPI to use a USB boot device (in which case the Rufus software will probably still not work as expected). You would be using Jetson flashing software to set up the USB stick instead of something created by Rufus. Btw, I don’t know much about Rufus, but what one calls a “bootable image” almost never applies to a Jetson, which instead uses an initrd image as an adapter to change from SD card boot to some other boot (even NVMe would need the initrd flash method).

Someone else may need to help if it is truly an issue of the AI lab software. However, from the above, can you give a specific failure you need to get around? Details matter, e.g., if you are talking about “/usr/local/cuda-8.0”, then you will probably need to show a compile command, including logs of the compile, which fail (not just a short snippet of log; you can put logs in a .txt file and attach it to the forum).
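The same tee trick from earlier works for capturing a full build log to attach (a sketch; adjust the make invocation to whatever you are actually running):

make 2>&1 | tee build_log.txt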

I will emphasize that some code expects to detect a dGPU via PCIe commands or nvidia-smi. In L4T R36.x there is a very minimal nvidia-smi, and a lot of software will break when trying to build it for an iGPU. Code from one of NVIDIA’s AI labs should work with an iGPU if it is intended for a Jetson. Code intended for a desktop GPU (dGPU) will typically fail on a Jetson even if it is from NVIDIA. It is very important to know if the code you are looking at is specifically for a Jetson.

Does the above help? If not, then someone with AI lab knowledge can probably help. I’m just trying to get this to where it is possible to boot with the intended software such that AI labs can work. If you have an example of a compile failing to find CUDA or something else, and you have a detailed log, then I can probably tell you how to fix that.