llama.cpp compile failed on Jetson Orin Nano (8 GB)

My Orin Nano (8 GB) is flashed with JetPack 6.0 (CUDA 12.2, GCC 11.4). When I compile llama.cpp (with CUDA) from source on the Orin Nano, the following errors occur. Has anyone compiled it successfully on the Nano? Does anyone know how to fix this error?

/usr/lib/gcc/aarch64-linux-gnu/11/include/arm_neon.h(38): error: identifier "__Int8x8_t" is undefined
typedef __Int8x8_t int8x8_t;
^
/usr/lib/gcc/aarch64-linux-gnu/11/include/arm_neon.h(39): error: identifier "__Int16x4_t" is undefined
typedef __Int16x4_t int16x4_t;
^
/usr/lib/gcc/aarch64-linux-gnu/11/include/arm_neon.h(40): error: identifier "__Int32x2_t" is undefined
typedef __Int32x2_t int32x2_t;
^
/usr/lib/gcc/aarch64-linux-gnu/11/include/arm_neon.h(41): error: identifier "__Int64x1_t" is undefined
typedef __Int64x1_t int64x1_t;
^
/usr/lib/gcc/aarch64-linux-gnu/11/include/arm_neon.h(42): error: identifier "__Float16x4_t" is undefined
typedef __Float16x4_t float16x4_t;
^
/usr/lib/gcc/aarch64-linux-gnu/11/include/arm_neon.h(43): error: identifier "__Float32x2_t" is undefined
typedef __Float32x2_t float32x2_t;
^
/usr/lib/gcc/aarch64-linux-gnu/11/include/arm_neon.h(44): error: identifier "__Poly8x8_t" is undefined
typedef __Poly8x8_t poly8x8_t;
^
/usr/lib/gcc/aarch64-linux-gnu/11/include/arm_neon.h(45): error: identifier "__Poly16x4_t" is undefined
typedef __Poly16x4_t poly16x4_t;

Error limit reached.
100 errors detected in the compilation of "/home/nano8g-1/llama.cpp/ggml/src/ggml-cuda/ggml-cuda.cu".
Compilation terminated.
gmake[2]: *** [ggml/src/ggml-cuda/CMakeFiles/ggml-cuda.dir/build.make:314: ggml/src/ggml-cuda/CMakeFiles/ggml-cuda.dir/ggml-cuda.cu.o] Error 4
gmake[1]: *** [CMakeFiles/Makefile2:1760: ggml/src/ggml-cuda/CMakeFiles/ggml-cuda.dir/all] Error 2
gmake: *** [Makefile:146: all] Error 2

Hi,

It looks like you encountered an issue similar to the one in the link below.
Could you give their solution a try?

We also have a container with llama.cpp preinstalled. You don’t need to build it manually.

Thanks.

Thanks!
It is not a problem with the Orin Nano itself. I solved it by updating CMake to the latest version, although I still don't know why that fixed it.
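For anyone hitting the same wall, here is a minimal sketch of that fix: install a newer CMake and reconfigure from a clean build directory so the CUDA toolchain settings are regenerated. The install route (pip) and the `-DGGML_CUDA=ON` flag are assumptions about your setup; older llama.cpp revisions used `LLAMA_CUBLAS`/`LLAMA_CUDA` instead, so check your tree's docs.

```shell
# Sketch: install a recent CMake via pip (the Kitware apt repository
# is an alternative route on Ubuntu-based JetPack).
pip3 install --upgrade cmake
cmake --version   # confirm the new version is the one on PATH

# Rebuild llama.cpp with CUDA from a clean build directory so the
# updated CMake regenerates all CUDA-related configuration.
cd ~/llama.cpp
rm -rf build
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j 4
```

If the old CMake still shadows the new one, `hash -r` or opening a fresh shell usually resolves it.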

I am facing the same issue while trying to build llama.cpp on the Jetson Orin Nano.
The problem is that llama.cpp cannot directly configure everything for the Cortex-A78 CPU; its build options only go up to Cortex-A77.
I am trying to update the CMakeLists.txt so that it works.
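As a sketch of that kind of workaround, one could override the CPU flags at configure time instead of editing CMakeLists.txt. GCC 11 (as shipped with JetPack 6.0) should accept `-mcpu=cortex-a78`, but verify with `gcc --target-help`; whether these flags survive llama.cpp's own CPU detection depends on your revision, so treat this as an assumption to test:

```shell
# Sketch: pass the target CPU explicitly rather than relying on
# llama.cpp's architecture detection (which may stop at Cortex-A77).
cd ~/llama.cpp
cmake -B build -DGGML_CUDA=ON \
      -DCMAKE_C_FLAGS="-mcpu=cortex-a78" \
      -DCMAKE_CXX_FLAGS="-mcpu=cortex-a78"
cmake --build build -j 4
```

If your revision hardcodes an `-mcpu` value in its CMakeLists.txt, the explicit flags above may conflict with it and you would still need the in-file edit.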

@ZhaoWS, when you compiled the llama.cpp (with CUDA) source code, what instructions did you follow? Which makefile did you use?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.