Orin Nano Hello AI World Inferencing Not Working in JetPack 6.1

I built the Hello AI World project from source a few days ago on my Orin Nano Dev Kit (purchased from Sparkfun.com) with JetPack 6.1 installed via the SD card method. The QSPI bootloader was also updated to firmware version 36.4. The Hello AI World project installed without any issues. My system details are summarized in this file: system_info.txt (1018 Bytes). The CSI camera also functions properly.
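For reference, the L4T and TensorRT versions in play can be confirmed like this (standard locations on a Jetson; shown here just as a sanity check):

# L4T release string (JetPack 6.1 corresponds to L4T R36.4)
head -n 1 /etc/nv_tegra_release

# TensorRT packages installed by JetPack
dpkg -l | grep -i tensorrt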

When I try to run basic image classification, I receive the following error: (full output attached here: imagenet_info.txt (6.4 KB))

jet@sky:~/jetson-inference/build/aarch64/bin$ ./imagenet images/orange_0.jpg 

[TRT]    TensorRT 10.3 does not support legacy caffe models
[TRT]    device GPU, failed to load networks/Googlenet/bvlc_googlenet.caffemodel
[TRT]    failed to load networks/Googlenet/bvlc_googlenet.caffemodel
[TRT]    imageNet -- failed to initialize.

I also tried basic object detection, but got the same error message: (full output here: detection_info.txt (7.1 KB))

jet@sky:~/jetson-inference/build/aarch64/bin$ ./detectnet --network=ssd-mobilenet-v2 images/peds_0.jpg images/test/output.jpg

[TRT]    TensorRT 10.3 does not support legacy caffe models
[TRT]    device GPU, failed to load networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
[TRT]    detectNet -- failed to initialize.
detectnet:  failed to load detectNet model

Tried more ideas…
Both the C++ and Python3 methods fail. I also tried different model networks, but no luck. After reading similar forum threads on this problem, I tried the following network.

./imagenet --model=resnet18-tagging-voc --topK=0 --threshold=0.25 "images/object_*.jpg" images/test/tagging_%i.jpg

It seemed to get past the TensorRT error, but it failed anyway with a segmentation fault, shown below. Full output here: resnet18-tagging-voc-info.txt (8.4 KB)

[TRT]    binding to input 0 input_0  binding index:  0
[TRT]    binding to input 0 input_0  dims (b=1 c=3 h=224 w=224) size=602112
[TRT]    binding to output 0 output_0  binding index:  1
[TRT]    binding to output 0 output_0  dims (b=1 c=1 h=20 w=0) size=80
Segmentation fault (core dumped)

What am I missing here? Has the Hello AI World project been fully vetted for JetPack 6.1?

Hi,

As the log indicates, the latest TensorRT drops Caffe model support, so those legacy models won't work.
For JetPack 6.1, you can start with our Generative AI tutorial.

Thanks

I understand the Caffe model is no longer supported. That's why I later tried the resnet18-tagging-voc model, which uses the ONNX format, not Caffe. It got past the original error, but two new problems came up: 1) it could not register the "plugin creator", and 2) a segmentation fault occurred. See messages below: (full output here: segmentation_fault.txt (8.4 KB))

jet@sky:~/jetson-inference/build/aarch64/bin$ imagenet --model=resnet18-tagging-voc --topK=0 --threshold=0.25 "images/object_*.jpg" images/test/tagging_%i.jpg
...
[TRT]    Could not register plugin creator -  ::FlattenConcat_TRT version 1
...   
[TRT]    binding to input 0 input_0  binding index:  0
[TRT]    binding to input 0 input_0  dims (b=1 c=3 h=224 w=224) size=602112
[TRT]    binding to output 0 output_0  binding index:  1
[TRT]    binding to output 0 output_0  dims (b=1 c=1 h=20 w=0) size=80
Segmentation fault (core dumped)
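
For what it's worth, I can capture a backtrace with gdb if that helps narrow down the crash (standard gdb usage; a debug build of jetson-inference would make the trace more readable):

gdb -batch -ex run -ex bt --args ./imagenet --model=resnet18-tagging-voc --topK=0 --threshold=0.25 images/object_0.jpg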

Are there other ONNX-based models I can try? Thank you.

Hi,

There are several API changes in TensorRT 10, but jetson-inference doesn't support them yet.
To deploy the model with TensorRT, please try the trtexec tool instead.
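
For example, to build and benchmark an engine from an ONNX model (illustrative file names; trtexec is installed with TensorRT under /usr/src/tensorrt/bin):

# Build a TensorRT engine from an ONNX model
/usr/src/tensorrt/bin/trtexec --onnx=resnet18.onnx --saveEngine=resnet18.engine --fp16

# Load and benchmark the saved engine
/usr/src/tensorrt/bin/trtexec --loadEngine=resnet18.engine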

Thanks.

In the meantime, I started from scratch with JetPack 6.0 (L4T 36.3, firmware 36.3) on a separate Orin Nano devkit. System info is as follows: system_info.txt (2.4 KB). I have two devkits from Sparkfun.com.

The Hello AI World project works almost flawlessly on JP6.0. No issues with most of the imagenet and detectnet model networks.

New problem: the newer TAO object detection networks are not working (peoplenet, peoplenet-pruned, dashcamnet, trafficcamnet & facedetect). The key error is this:

./tao-converter: error while loading shared libraries: libcrypto.so.1.1: cannot open shared object file: No such file or directory

Full Output Here:

jet@sky:~/jetson-inference/build/aarch64/bin$ detectnet --model=peoplenet-pruned pedestrians.mp4 pedestrians_peoplenet.mp4
[gstreamer] initialized gstreamer, version 1.20.3.0
[gstreamer] gstDecoder -- creating decoder for pedestrians.mp4
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NvMMLiteBlockCreate : Block : BlockType = 261 

(detectnet:14340): GStreamer-CRITICAL **: 02:47:08.245: gst_debug_log_valist: assertion 'category != NULL' failed

(detectnet:14340): GStreamer-CRITICAL **: 02:47:08.245: gst_debug_log_valist: assertion 'category != NULL' failed

(detectnet:14340): GStreamer-CRITICAL **: 02:47:08.245: gst_debug_log_valist: assertion 'category != NULL' failed

(detectnet:14340): GStreamer-CRITICAL **: 02:47:08.245: gst_debug_log_valist: assertion 'category != NULL' failed
[gstreamer] gstDecoder -- discovered video resolution: 960x540  (framerate 29.970030 Hz)
[gstreamer] gstDecoder -- discovered video caps:  video/x-h264, stream-format=(string)byte-stream, alignment=(string)au, level=(string)3.1, profile=(string)high, width=(int)960, height=(int)540, framerate=(fraction)30000/1001, pixel-aspect-ratio=(fraction)1/1, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true
[gstreamer] gstDecoder -- pipeline string:
[gstreamer] filesrc location=pedestrians.mp4 ! qtdemux ! queue ! h264parse ! nvv4l2decoder name=decoder enable-max-performance=1 ! video/x-raw(memory:NVMM) ! nvvidconv name=vidconv ! video/x-raw ! appsink name=mysink
[video]  created gstDecoder from file:///home/jet/jetson-inference/build/aarch64/bin/pedestrians.mp4
------------------------------------------------
gstDecoder video options:
------------------------------------------------
  -- URI: file:///home/jet/jetson-inference/build/aarch64/bin/pedestrians.mp4
     - protocol:  file
     - location:  pedestrians.mp4
     - extension: mp4
  -- deviceType: file
  -- ioType:     input
  -- codec:      H264
  -- codecType:  v4l2
  -- width:      960
  -- height:     540
  -- frameRate:  29.97
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
------------------------------------------------
[gstreamer] gstEncoder -- codec not specified, defaulting to H.264
[gstreamer] gstEncoder -- detected board 'NVIDIA Jetson Orin Nano Developer Kit'
[gstreamer] gstEncoder -- hardware encoder not detected, reverting to CPU encoder
[gstreamer] gstEncoder -- pipeline launch string:
[gstreamer] appsrc name=mysource is-live=true do-timestamp=true format=3 ! x264enc name=encoder bitrate=4000 speed-preset=ultrafast tune=zerolatency ! video/x-h264 ! h264parse ! qtmux ! filesink location=pedestrians_peoplenet.mp4 
[video]  created gstEncoder from file:///home/jet/jetson-inference/build/aarch64/bin/pedestrians_peoplenet.mp4
------------------------------------------------
gstEncoder video options:
------------------------------------------------
  -- URI: file:///home/jet/jetson-inference/build/aarch64/bin/pedestrians_peoplenet.mp4
     - protocol:  file
     - location:  pedestrians_peoplenet.mp4
     - extension: mp4
  -- deviceType: file
  -- ioType:     output
  -- codec:      H264
  -- codecType:  cpu
  -- frameRate:  30
  -- bitRate:    4000000
  -- numBuffers: 4
  -- zeroCopy:   true
------------------------------------------------
[OpenGL] glDisplay -- X screen 0 resolution:  3840x2160
[OpenGL] glDisplay -- X window resolution:    3840x2160
[OpenGL] glDisplay -- display device initialized (3840x2160)
[video]  created glDisplay from display://0
------------------------------------------------
glDisplay video options:
------------------------------------------------
  -- URI: display://0
     - protocol:  display
     - location:  0
  -- deviceType: display
  -- ioType:     output
  -- width:      3840
  -- height:     2160
  -- frameRate:  0
  -- numBuffers: 4
  -- zeroCopy:   true
------------------------------------------------
[TRT]    running model command:  tao-model-downloader.sh peoplenet_pruned_quantized_v2.3.2
ARCH:  aarch64
reading L4T version from /etc/nv_tegra_release
L4T BSP Version:  L4T R36.3.0
[TRT]    downloading peoplenet_pruned_quantized_v2.3.2
resnet34_peoplenet_pruned_int8.etlt                           100%[==============================================================================================================================================>]   8.53M  25.9MB/s    in 0.3s    
resnet34_peoplenet_pruned_int8.txt                            100%[==============================================================================================================================================>]   9.20K  --.-KB/s    in 0.001s  
labels.txt                                                    100%[==============================================================================================================================================>]      17  --.-KB/s    in 0s      
colors.txt                                                    100%[==============================================================================================================================================>]      27  --.-KB/s    in 0s      
[TRT]    downloading tao-converter from https://api.ngc.nvidia.com/v2/resources/nvidia/tao/tao-converter/versions/v3.22.05_trt8.4_aarch64/files/tao-converter
tao-converter                                                 100%[==============================================================================================================================================>] 128.62K  --.-KB/s    in 0.02s   
detectNet -- converting TAO model to TensorRT engine:
          -- input          resnet34_peoplenet_pruned_int8.etlt
          -- output         resnet34_peoplenet_pruned_int8.etlt.engine
          -- calibration    resnet34_peoplenet_pruned_int8.txt
          -- encryption_key tlt_encode
          -- input_dims     3,544,960
          -- output_layers  output_bbox/BiasAdd,output_cov/Sigmoid
          -- max_batch_size 1
          -- workspace_size 4294967296
          -- precision      int8
./tao-converter: error while loading shared libraries: libcrypto.so.1.1: cannot open shared object file: No such file or directory
[TRT]    failed to convert model 'resnet34_peoplenet_pruned_int8.etlt' to TensorRT...
[TRT]    failed to download model after 2 retries
[TRT]    if this error keeps occuring, see here for a mirror to download the models from:
[TRT]       https://github.com/dusty-nv/jetson-inference/releases
[TRT]    failed to download built-in detection model 'peoplenet-pruned'
detectnet:  failed to load detectNet model

It seems the network was downloaded properly, including the tao-converter. Here are the downloaded contents:

jet@sky:~/jetson-inference/data/networks/peoplenet_pruned_quantized_v2.3.2$ ll
total 8900
drwxrwxr-x  2 jet jet    4096 Dec  6 02:47 ./
drwxrwxr-x 22 jet jet    4096 Dec  6 02:47 ../
-rw-rw-r--  1 jet jet      27 Dec  6 02:47 colors.txt
-rw-rw-r--  1 jet jet      17 Dec  9  2022 labels.txt
-rw-rw-r--  1 jet jet 8948092 Dec  9  2022 resnet34_peoplenet_pruned_int8.etlt
-rw-rw-r--  1 jet jet    9416 Dec  9  2022 resnet34_peoplenet_pruned_int8.txt
-rwxrwxr-x  1 jet jet  131712 May 26  2022 tao-converter*
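
The binary itself is present and executable, so the failure looks like a missing runtime dependency: tao-converter was linked against OpenSSL 1.1, while JetPack 6 (Ubuntu 22.04) ships OpenSSL 3. One workaround I have seen suggested, but have not tried yet, is to install the legacy libssl1.1 package from Ubuntu 20.04 (focal); the exact version suffix below is a guess and may have changed, so check the archive directory listing first:

# Confirm which shared libraries are unresolved
ldd ./tao-converter | grep "not found"

# Install the OpenSSL 1.1 runtime (provides libcrypto.so.1.1) from the focal arm64 archive
wget http://ports.ubuntu.com/pool/main/o/openssl/libssl1.1_1.1.1f-1ubuntu2_arm64.deb
sudo dpkg -i libssl1.1_1.1.1f-1ubuntu2_arm64.deb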

What am I missing here?

Hi,

Please file your new issue to the TAO forum directly.

Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.