"Hello AI World" Project Errors on Orin Nano JetPack 6.1

Today I installed JetPack 6.1 on my official Jetson Orin Nano dev kit with a brand-new SD card image. The dev kit firmware was also updated to r36.4. Everything seemed to work, although Chromium would not install during the initial setup process (a “snap” issue).

The main issue in this thread is the “Hello AI World” project. I attempted to install it from source. When I got to the PyTorch step, the installation tool came up okay, but an ImportError was encountered (log attached, 6.3 KB). I then ran python3 and tried importing PyTorch, but this error came up:

jet@sky:~$ python3
Python 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.10/dist-packages/torch/__init__.py", line 235, in <module>
    from torch._C import *  # noqa: F403
ImportError: libcudnn.so.8: cannot open shared object file: No such file or directory
>>>
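For anyone hitting the same thing: as far as I can tell, this wheel was linked against cuDNN 8, while JetPack 6.1 ships cuDNN 9, so the loader cannot resolve libcudnn.so.8. A quick diagnostic sketch to confirm what the dynamic linker actually sees (the soname is taken from the error above; on other setups the result will differ):

```python
import ctypes
import ctypes.util

# Ask the loader which cuDNN (if any) it can resolve; on a stock
# JetPack 6.1 image this typically points at cuDNN 9, not cuDNN 8.
resolved = ctypes.util.find_library("cudnn")
print("find_library('cudnn') ->", resolved)

# Try to load the exact soname this PyTorch wheel was linked against;
# a failure here reproduces the ImportError from the traceback above.
try:
    ctypes.CDLL("libcudnn.so.8")
    cudnn8_ok = True
except OSError as exc:
    cudnn8_ok = False
    print("libcudnn.so.8 ->", exc)
```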

Then I went on to the next step of the “Hello AI World” installation. I tried to run “make”, but ran into the following error.

jet@sky:~$ cd jetson-inference/build
jet@sky:~/jetson-inference/build$ make
[ 46%] Built target jetson-utils
[ 47%] Building CXX object CMakeFiles/jetson-inference.dir/c/tensorNet.cpp.o
/home/jet/jetson-inference/c/tensorNet.cpp:29:10: fatal error: NvCaffeParser.h: No such file or directory
   29 | #include "NvCaffeParser.h"
      |          ^~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [CMakeFiles/jetson-inference.dir/build.make:1583: CMakeFiles/jetson-inference.dir/c/tensorNet.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:394: CMakeFiles/jetson-inference.dir/all] Error 2
make: *** [Makefile:136: all] Error 2
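If I understand correctly, NvCaffeParser.h is the legacy Caffe parser header, which NVIDIA removed from TensorRT by version 10 (the version in JetPack 6.1), so the header genuinely no longer exists on the system. A small sketch to check whether the header is present (the include paths are assumptions about a stock JetPack/Ubuntu install):

```python
import glob
import os

# Usual TensorRT header locations on JetPack (aarch64 multiarch dir)
# plus generic fallbacks; adjust for a non-standard install.
search_dirs = (
    "/usr/include/aarch64-linux-gnu",
    "/usr/include/x86_64-linux-gnu",
    "/usr/include",
)
hits = [p for d in search_dirs for p in glob.glob(os.path.join(d, "NvCaffeParser.h"))]
if hits:
    print("Found:", hits)
else:
    print("NvCaffeParser.h not found (the Caffe parser was removed in TensorRT 10)")
```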

None of the errors mentioned in this thread occurred on the JetPack 6.0 production release. Please advise.


Same problem on Orin NX as well.

To capture the error logs, I started from scratch on my Orin Nano dev kit with a fresh SD card. During the initial setup, Chromium would not install; it said “Error: Cannot connect to snap store”.

Back to the “Hello AI World” project, here are the logs from the PyTorch installation and the “make” step:

pytorch_log.txt (26.4 KB)
make_log.txt (28.4 KB)

I basically received the same errors as on the first installation. Please advise.

Same on my freshly installed Orin Nano as well.

Hey guys - I’m just working through changes to the jetson-inference library for the updated version of TensorRT 10.4 in JetPack 6.1; it should be ready in a couple of days. Other components, like PyTorch, torchvision, and OpenCV+CUDA, have already been built, and in the meantime you can grab the wheels from jetson.webredirect.org/jp6/cu126

PyTorch, torchvision, torchaudio, and llama installed successfully from the wheels, which is good for now. When it’s ready, I will still build the “Hello AI World” project from source; I prefer direct libraries over containers.

I also tried to install the tensorflow-addons* wheel you provided, but the installation failed because these dependencies could not be satisfied:

tensorflow<2.16.0,>=2.13.0; extra == "tensorflow"
tensorflow-cpu<2.16.0,>=2.13.0; extra == "tensorflow-cpu"
tensorflow-gpu<2.16.0,>=2.13.0; extra == "tensorflow-gpu"
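In other words, the addons wheel pins tensorflow to >=2.13.0,<2.16.0, so any TF 2.16+ wheel fails dependency resolution. A minimal stdlib sketch of that range check (the helper names are mine, just for illustration):

```python
def vtuple(version):
    """Parse an 'X.Y.Z' version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def in_supported_range(installed, low="2.13.0", high="2.16.0"):
    """True if `installed` satisfies tensorflow-addons' >=low,<high pin."""
    return vtuple(low) <= vtuple(installed) < vtuple(high)

print(in_supported_range("2.15.1"))  # True: inside the pinned range
print(in_supported_range("2.16.0"))  # False: too new for tensorflow-addons
```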

Do you have a working wheel for TensorFlow 2.16, especially a GPU-enabled version? That would be very useful.

Thank you.

Thanks @xplanescientist, still working through tensorflow-addons, tensorflow-text, and tensorflow-graphics. @johnnynuca14 and I had been suffering through those to get JAX to build. In the larger scheme, it would seem TF2 development has slowed in favor of a broader migration towards JAX, and those packages are essentially there for backwards compatibility at this point.


This means the project uses cuDNN 8, which is very old at this point. Jetson now has all the SOTA frameworks and libraries:

CUDA: 12.6
cuDNN: 9.4
TensorRT: 10.4 (also compatible with 10.5)
PyTorch: 2.4.0
JAX: 0.4.33
TensorFlow: 2.18 (the whole TensorFlow community is moving to JAX)
JAX is now #1 in performance. It’s also more Pythonic.

TIP: For TensorFlow, I recommend waiting for v2.18.0, because that is the version that includes my fixes to the XLA compiler. It is not released yet.

I’m running all containers on JetPack 6.1 without problems.

@dusty_nv is migrating everything to JetPack 6.1, but there are a lot of containers, so you can build them yourself:
CUDA_VERSION=12.6 PYTHON_VERSION=<version> jetson-containers build <package>

There was an issue reported on GitHub wherein MLC in an r36.3 container was crashing on r36.4, so while I haven’t had time to fully investigate yet, it would seem that some containers need to be rebuilt. Let me know if you have been able to run other CUDA 12.2 containers on JetPack 6.1, like PyTorch.

I’m still updating jetson-inference for TensorRT 10, because it still gets used for core vision CNNs and is faster than vanilla PyTorch, TF, or JAX.


@dusty_nv, for me it compiles.
Sometimes I do

sudo docker system prune -a

to avoid conflicts.
Be careful: this removes all Docker images.

@dusty_nv, any updates on the “Hello AI World” project for r36.4? If it will be ready in the next few days, I’ll wait. Otherwise, I’ll go back to my r36.3 SD card and reflash the dev kit to r36.3. I’ve noticed that the Ubuntu kernel and the firmware must match, otherwise problems arise.

Hi friends,
I’m also eagerly waiting for the update so that we can run jetson-inference, YOLOv8, etc., since these need PyTorch v2.5, torchvision >0.20.1, etc.
Regards,

– Hemsingh