Today I installed JetPack 6.1 on my official Jetson Orin Nano dev-kit with a brand-new SD card image. The dev-kit firmware was also updated to r36.4. Everything seemed to work okay, although Chromium would not install during the initial setup process (a “snap” issue).
The main issue in this thread is the “Hello AI World” project. I attempted to install it from source. When I got to the PyTorch step, the installation tool came up okay, but an ImportError was encountered. I then ran python3 and tried importing torch directly, and this error came up:
jet@sky:~$ python3
Python 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/dist-packages/torch/__init__.py", line 235, in <module>
from torch._C import * # noqa: F403
ImportError: libcudnn.so.8: cannot open shared object file: No such file or directory
>>>
.
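For what it’s worth, here is a quick way to see the mismatch (my own diagnostic sketch, not part of the original install steps): JetPack 6.1 ships cuDNN 9, so a PyTorch wheel that was linked against libcudnn.so.8 (built for JetPack 6.0) fails to import exactly like this. Listing the cuDNN libraries the dynamic loader actually knows about shows what is on the system:
# Diagnostic sketch: list the cuDNN libraries visible to the loader.
# On JetPack 6.1 this should report libcudnn.so.9, not the libcudnn.so.8
# that the older PyTorch wheel was linked against.
import subprocess
out = subprocess.run(["/sbin/ldconfig", "-p"], capture_output=True, text=True).stdout
print([line.strip() for line in out.splitlines() if "libcudnn" in line])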
Then I went on to the next step in the “Hello AI World” installation. I tried to run “make”, but ran into the following error.
jet@sky:~$ cd jetson-inference/build
jet@sky:~/jetson-inference/build$ make
[ 46%] Built target jetson-utils
[ 47%] Building CXX object CMakeFiles/jetson-inference.dir/c/tensorNet.cpp.o
/home/jet/jetson-inference/c/tensorNet.cpp:29:10: fatal error: NvCaffeParser.h: No such file or directory
29 | #include "NvCaffeParser.h"
| ^~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [CMakeFiles/jetson-inference.dir/build.make:1583: CMakeFiles/jetson-inference.dir/c/tensorNet.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:394: CMakeFiles/jetson-inference.dir/all] Error 2
make: *** [Makefile:136: all] Error 2
.
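As far as I can tell, this header is missing because TensorRT 10 in JetPack 6.1 dropped the Caffe (and UFF) parsers that the current jetson-inference code still includes. A quick check of the installed TensorRT version (just a sketch, assuming the python3-libnvinfer bindings are present) confirms which major version the build is compiling against:
# Sanity-check sketch: confirm the TensorRT version on the system.
# TensorRT 10.x no longer ships NvCaffeParser.h or the Caffe/UFF parsers,
# which is why this include fails under JetPack 6.1.
import tensorrt as trt
print(trt.__version__)   # expect 10.x on JetPack 6.1 (r36.4)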
None of the errors mentioned in this thread occurred with the JetPack 6.0 production release. Please advise.
To capture the error logs, I started from scratch on my Orin Nano dev-kit with a fresh SD card. During the initial setup, Chromium would not install; it said “Error: Cannot connect to snap store”.
Back to the “Hello AI World” project, here are the logs following the PyTorch installation and the “make” step:
Hey guys - I’m just working through the changes to the jetson-inference library for the updated TensorRT 10.4 in JetPack 6.1; it should be ready in a couple of days. Other components like PyTorch, torchvision, and OpenCV+CUDA have already been built, and you can grab the wheels in the meantime from jetson.webredirect.org/jp6/cu126
PyTorch, torchvision, torchaudio, and llama installed successfully with the wheels, so this is good for now. When it’s ready, I will still build the “Hello AI World” project from source. I prefer direct libraries over containers.
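As a quick sanity check after installing the wheels (my own sketch, not from the install instructions), you can confirm the new PyTorch build actually sees CUDA and the cuDNN 9 that JetPack 6.1 ships:
# Post-install sanity check sketch: verify the JetPack 6.1 wheels load correctly.
import torch
import torchvision
print(torch.__version__, torchvision.__version__)
print(torch.version.cuda)                  # CUDA version the wheel was built against
print(torch.backends.cudnn.version())      # should report cuDNN 9.x on JetPack 6.1
print(torch.cuda.is_available())           # True if the Orin GPU is visible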
I also tried to install the tensorflow-addons wheel you provided, but the wheel installation did not work because it’s missing these dependencies:
tensorflow<2.16.0,>=2.13.0; extra == "tensorflow"
tensorflow-cpu<2.16.0,>=2.13.0; extra == "tensorflow-cpu"
tensorflow-gpu<2.16.0,>=2.13.0; extra == "tensorflow-gpu"
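Those three lines look like the optional-extra requirements from the wheel’s metadata (the “extra == …” markers only apply when installing something like tensorflow-addons[tensorflow]). If it helps, here is a small sketch (assuming the package metadata did get installed) to print exactly what the wheel declares:
# Sketch: show tensorflow-addons' declared dependencies, including the
# "extra == ..." markers quoted above (requires the package metadata to be installed).
from importlib.metadata import requires
for req in requires("tensorflow-addons") or []:
    print(req)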
Do you have a working wheel for tensorflow 2.16, especially the GPU-enabled version? That would be very useful.
Thanks @xplanescientist, still working through tensorflow-addons, tensorflow-text, and tensorflow-graphics. @johnnynunez and I had been suffering through those to get JAX to build. In the larger scheme, it would seem TF2 development has slowed in favor of a greater migration towards JAX, and those packages are essentially there for backwards compatibility at this point.
This means that this project uses cuDNN 8, which is very old at this point. Jetson now has all the SOTA frameworks and libraries:
CUDA: 12.6
cuDNN: 9.4
TensorRT: 10.4 (also compatible with 10.5)
PyTorch: 2.4.0
JAX: 0.4.33
TensorFlow: 2.18 (the whole TensorFlow community is moving to JAX).
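To round that list out, the JAX and TensorFlow entries can be verified at runtime too (a quick sketch of mine, assuming both packages are installed from the jp6/cu126 wheels):
# Runtime check sketch: confirm JAX and TensorFlow see the Orin GPU.
import jax
import tensorflow as tf
print(jax.__version__, tf.__version__)
print(jax.devices())                            # should list a CUDA device
print(tf.config.list_physical_devices("GPU"))   # should list the GPU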
JAX is now the top performer, and it’s more Pythonic.
TIP: For TensorFlow, I recommend waiting for v2.18.0 because that’s the version that has my fixes to the XLA compiler. It’s still not released.
I’m running all containers on JetPack 6.1 without problems.
@dusty_nv is migrating everything to JetPack 6.1, but there are a lot of containers, so you can build them yourself:
CUDA_VERSION=12.6 PYTHON_VERSION=<version> jetson-containers build <package>
There was an issue reported on GitHub wherein MLC in an r36.3 container was crashing on r36.4, so while I haven’t had time to fully investigate yet, it would seem that some of them need to be rebuilt. Let me know if you have been able to run other CUDA 12.2 containers on JetPack 6.1, like PyTorch.
I’m still updating jetson-inference for TensorRT 10, because it still gets used for core vision CNNs and is faster than vanilla PyTorch, TF, or JAX.
@dusty_nv, any updates on the “Hello AI World” project for r36.4? If it will be ready in the next few days, I’ll wait. Otherwise, I’ll revert to my r36.3 SD card and reflash the dev-kit to r36.3. I’ve noticed that the Ubuntu kernel and the firmware must be consistent, otherwise problems arise.
Hi friends,
I’m also eagerly waiting for the update so that we can run jetson-inference, YOLOv8, etc., as these need PyTorch v2.5, torchvision >0.20.1, etc.
Regards,
The October 3rd reply indicates this thread is solved, but the “Hello AI World” project does not build from source. Can you give us an update? Thank you.
Hi guys, I was able to get it building with commit c038530. However, it is not running yet, and a bunch of the models are no longer supported in TensorRT. jetson-utils is working, though. I have not updated the PyTorch/torchvision installer script yet, but you can now get those wheels from the pip server I linked to before.
Hi, please let me know (I’m not very familiar with CUDA): will the SSD (COCO) models work on JetPack 6.1 in the future?
Thanks.
@hemsinghb I think the UFF COCO versions of those may need to be retired; I could not find another converter from UFF->ONNX, although if we went back to the original TF model, it may be possible to export that to ONNX. The fine-tuned ones from the tutorial use a PyTorch implementation (with ONNX support), so that is not an issue, but the base SSD-Mobilenet model for those came from Pascal VOC (~20 classes), not COCO (~90 classes).
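For the fine-tuned PyTorch models mentioned above, the ONNX path just relies on the standard PyTorch exporter. Here is a minimal sketch to show the idea; the model is a stand-in I made up for illustration, not the tutorial’s actual SSD-Mobilenet or export script:
# Generic PyTorch -> ONNX export sketch (stand-in model, illustrative only).
import torch
import torch.nn as nn

class TinyDetector(nn.Module):                       # placeholder, not the tutorial's SSD
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 8, kernel_size=3, stride=2)
        self.head = nn.Conv2d(8, 4, kernel_size=1)   # e.g. a box-regression head
    def forward(self, x):
        return self.head(self.backbone(x))

model = TinyDetector().eval()
dummy = torch.randn(1, 3, 300, 300)                  # SSD-Mobilenet-style 300x300 input
torch.onnx.export(model, dummy, "detector.onnx",
                  input_names=["input"], output_names=["boxes"], opset_version=17)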
These days Ultralytics YOLO comes with TensorRT integration, so it is better out-of-the-box than it was years ago, and that is a factor too. If you’re on JetPack 6.1, that means you’re on Orin, so attaining realtime performance for detection is less of a hurdle. Ideally I will be able to convert the deprecated Caffe and UFF checkpoints to ONNX and retain compatibility.
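On the Ultralytics side, the TensorRT integration mentioned here is exposed through its export API. A minimal sketch (assuming the ultralytics package is installed on the Jetson; the image path is a placeholder):
# Sketch: export a YOLOv8 model to a TensorRT engine via Ultralytics' built-in exporter.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                 # downloads the checkpoint if not present
engine = model.export(format="engine")     # builds a TensorRT .engine on-device
trt_model = YOLO(engine)                   # reload the exported engine
trt_model.predict("test.jpg")              # placeholder image path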