Some issues encountered. First, JetPack 6.2 installed properly via the latest SD card image. The firmware updated correctly on its own after a few reboots. The MAXN power setting is available. All great. A summary of my system is here: system_info.txt (2.0 KB)
Major issues were encountered with Hello AI World when I attempted to build the project from source.
Problem #1: PyTorch and torchvision did not install properly. The error messages from Python are as follows:
sky@jet:~$ python3
Python 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.10/dist-packages/torch/__init__.py", line 235, in <module>
    from torch._C import * # noqa: F403
ImportError: libcudnn.so.8: cannot open shared object file: No such file or directory
>>> import torchvision
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'torchvision'
>>>
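The ImportError above suggests the torch wheel was built against cuDNN 8, while newer JetPack 6.x releases ship a different cuDNN major version. One way to see which cuDNN libraries the system actually provides is to parse the output of `ldconfig -p`. The helper below is a hypothetical diagnostic sketch (the function name is my own, not part of any official tooling):

```python
import subprocess

def find_cudnn_sonames(ldconfig_output: str) -> list[str]:
    """Return the distinct libcudnn sonames listed in `ldconfig -p` output."""
    names = set()
    for line in ldconfig_output.splitlines():
        # each entry looks like: "\tlibcudnn.so.9 (libc6,AArch64) => /usr/lib/..."
        name = line.strip().split(" ", 1)[0]
        if name.startswith("libcudnn"):
            names.add(name)
    return sorted(names)

if __name__ == "__main__":
    try:
        out = subprocess.run(["ldconfig", "-p"],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        out = ""  # ldconfig unavailable on this platform
    # If this prints a libcudnn.so.9 entry but no libcudnn.so.8, the wheel
    # and the system library are mismatched, matching the error above.
    print(find_cudnn_sonames(out))
```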
Problem #2: the C++ and Python image classification, object detection, and Posenet tools are not working properly. I understand that the Caffe-based models are not working because the latest version of TensorRT no longer supports Caffe models. I did try inferencing with ONNX networks ("resnet18-tagging-voc" and the posenet networks), but they all failed with the same "Segmentation fault (core dumped)" error. The full Jetson output is here: image_classification_errors.txt (950.3 KB)
Questions: (1) Will the Hello AI World project be updated for JetPack 6.2? (2) When will ONNX network versions be available for image classification and object detection? (3) Currently, a segmentation-fault error occurs with ONNX models, even though TensorRT 10.x is supposed to support ONNX. When will this be reconciled?
The Hello AI World project has been an essential learning tool ever since my first Jetson Nano, up through the Orin Nano devkit with JetPack 6.0. It is critical that the project work properly as it has in the recent past. Please prioritize this. I'm particularly interested in the C++ tools. Thank you.
I need “Hello AI World” to work on JP6.2 to get the latest tools and MAXN Super. I did make good progress on JP6.2!
First, torch and torchvision installed successfully using the pip wheels from Yolo Prep. See the section "YoloV11 using Pytorch", go to step #2, and download and install the wheels. Afterwards, I also had to run the following commands in a terminal to resolve a NumPy compatibility issue:
pip uninstall numpy
pip install "numpy<2"
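After these steps, a quick way to confirm the environment is to try importing each package and record its version or failure. This is a hypothetical sanity-check helper, not part of the guide:

```python
import importlib

def check_stack(modules=("numpy", "torch", "torchvision")):
    """Try to import each module and record its version (or the failure)."""
    report = {}
    for name in modules:
        try:
            mod = importlib.import_module(name)
            report[name] = getattr(mod, "__version__", "unknown")
        except Exception as exc:  # e.g. the libcudnn.so.8 ImportError shown earlier
            report[name] = f"FAILED: {exc}"
    return report

if __name__ == "__main__":
    for name, status in check_stack().items():
        print(f"{name}: {status}")
```

If numpy reports a 2.x version here, the "numpy<2" reinstall above has not taken effect in the interpreter you are using.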
Secondly, the segmentation fault error occurring with ONNX networks and TensorRT 10+ has been repaired, thanks to @ligaz. Although Caffe networks are no longer supported in TRT10+, ONNX models should be supported. It’s a matter of updating the “tensorNet.cpp” file in your “~/jetson-inference/c” directory with this one: tensorNet.cpp. Then rebuild the project starting with the steps under “Configuring with CMake”.
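For reference, the rebuild amounts to the standard jetson-inference CMake sequence (assuming the default ~/jetson-inference clone location):

```shell
# Rebuild jetson-inference after swapping in the patched tensorNet.cpp
cd ~/jetson-inference
mkdir -p build
cd build
cmake ../
make -j$(nproc)
sudo make install
sudo ldconfig
```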
Again, only the ONNX networks are supported. So the "imagenet" classification code works great with the "resnet18-tagging-voc" model, but don't try the other models because they are Caffe-based. "posenet" also works great with all of the provided models, because they are ONNX-based.
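As a sketch of the working combinations described above (assuming the models and sample images were downloaded by the project's model downloader; verify the exact model and image names against your install):

```shell
# Classification with the ONNX tagging model that works on TRT10+
imagenet --network=resnet18-tagging-voc images/cat_0.jpg out_cat.jpg

# Pose estimation with one of the ONNX posenet models
posenet --network=resnet18-body images/humans_0.jpg out_pose.jpg
```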
Unfortunately, none of the "object detection" models work - they are all Caffe-based. Does anyone have ONNX versions compatible with the "Hello AI World" "detectnet" project?
For detection on JetPack 6.2, please see the YOLO sample below (not detectnet-based):
If you want to run jetson-inference on JetPack 6.2, you may try it with the JetPack 6.0 container on the r36.4.3 BSP. That way, you can have Super mode configured while keeping the JetPack 6.0/TensorRT 8.6 software support.
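A rough sketch of the container route (jetson-inference provides a `docker/run.sh` launcher; which container tag is appropriate for the r36.4.3 BSP is an assumption you should verify against the project's container documentation):

```shell
# Clone the project and launch its prebuilt container
git clone --recursive https://github.com/dusty-nv/jetson-inference
cd jetson-inference
docker/run.sh   # selects/pulls a container image; see the script's options to override it
```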