In the meantime, I started from scratch with JetPack 6.0 (L4T 36.3, firmware 36.3) on a separate Orin Nano devkit. System info attached: system_info.txt (2.4 KB). I have two devkits from Sparkfun.com.
The Hello AI World project works almost flawlessly on JP6.0. No issues with most of the imagenet and detectnet networks.
New problem: the new TAO object detection networks (peoplenet, peoplenet-pruned, dashcamnet, trafficcamnet, and facedetect) are not working. The key error is this:
./tao-converter: error while loading shared libraries: libcrypto.so.1.1: cannot open shared object file: No such file or directory
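For context: JetPack 6 is based on Ubuntu 22.04, which ships OpenSSL 3 (libcrypto.so.3), while this prebuilt tao-converter binary was linked against OpenSSL 1.1 (libcrypto.so.1.1). A quick way to confirm the mismatch on the device (run from the model directory where tao-converter was downloaded):

```shell
# List the libcrypto versions the system actually provides
# (on JP6 / Ubuntu 22.04 this should show libcrypto.so.3, not .so.1.1)
ldconfig -p | grep libcrypto

# Show which shared libraries the binary expects; the missing ones
# are reported as "not found"
ldd ./tao-converter | grep -E 'libcrypto|not found'
```

If `ldd` reports `libcrypto.so.1.1 => not found`, the binary itself is fine and the problem is purely the missing OpenSSL 1.1 runtime.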
Full Output Here:
jet@sky:~/jetson-inference/build/aarch64/bin$ detectnet --model=peoplenet-pruned pedestrians.mp4 pedestrians_peoplenet.mp4
[gstreamer] initialized gstreamer, version 1.20.3.0
[gstreamer] gstDecoder -- creating decoder for pedestrians.mp4
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
(detectnet:14340): GStreamer-CRITICAL **: 02:47:08.245: gst_debug_log_valist: assertion 'category != NULL' failed
(detectnet:14340): GStreamer-CRITICAL **: 02:47:08.245: gst_debug_log_valist: assertion 'category != NULL' failed
(detectnet:14340): GStreamer-CRITICAL **: 02:47:08.245: gst_debug_log_valist: assertion 'category != NULL' failed
(detectnet:14340): GStreamer-CRITICAL **: 02:47:08.245: gst_debug_log_valist: assertion 'category != NULL' failed
[gstreamer] gstDecoder -- discovered video resolution: 960x540 (framerate 29.970030 Hz)
[gstreamer] gstDecoder -- discovered video caps: video/x-h264, stream-format=(string)byte-stream, alignment=(string)au, level=(string)3.1, profile=(string)high, width=(int)960, height=(int)540, framerate=(fraction)30000/1001, pixel-aspect-ratio=(fraction)1/1, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true
[gstreamer] gstDecoder -- pipeline string:
[gstreamer] filesrc location=pedestrians.mp4 ! qtdemux ! queue ! h264parse ! nvv4l2decoder name=decoder enable-max-performance=1 ! video/x-raw(memory:NVMM) ! nvvidconv name=vidconv ! video/x-raw ! appsink name=mysink
[video] created gstDecoder from file:///home/jet/jetson-inference/build/aarch64/bin/pedestrians.mp4
------------------------------------------------
gstDecoder video options:
------------------------------------------------
-- URI: file:///home/jet/jetson-inference/build/aarch64/bin/pedestrians.mp4
- protocol: file
- location: pedestrians.mp4
- extension: mp4
-- deviceType: file
-- ioType: input
-- codec: H264
-- codecType: v4l2
-- width: 960
-- height: 540
-- frameRate: 29.97
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0
------------------------------------------------
[gstreamer] gstEncoder -- codec not specified, defaulting to H.264
[gstreamer] gstEncoder -- detected board 'NVIDIA Jetson Orin Nano Developer Kit'
[gstreamer] gstEncoder -- hardware encoder not detected, reverting to CPU encoder
[gstreamer] gstEncoder -- pipeline launch string:
[gstreamer] appsrc name=mysource is-live=true do-timestamp=true format=3 ! x264enc name=encoder bitrate=4000 speed-preset=ultrafast tune=zerolatency ! video/x-h264 ! h264parse ! qtmux ! filesink location=pedestrians_peoplenet.mp4
[video] created gstEncoder from file:///home/jet/jetson-inference/build/aarch64/bin/pedestrians_peoplenet.mp4
------------------------------------------------
gstEncoder video options:
------------------------------------------------
-- URI: file:///home/jet/jetson-inference/build/aarch64/bin/pedestrians_peoplenet.mp4
- protocol: file
- location: pedestrians_peoplenet.mp4
- extension: mp4
-- deviceType: file
-- ioType: output
-- codec: H264
-- codecType: cpu
-- frameRate: 30
-- bitRate: 4000000
-- numBuffers: 4
-- zeroCopy: true
------------------------------------------------
[OpenGL] glDisplay -- X screen 0 resolution: 3840x2160
[OpenGL] glDisplay -- X window resolution: 3840x2160
[OpenGL] glDisplay -- display device initialized (3840x2160)
[video] created glDisplay from display://0
------------------------------------------------
glDisplay video options:
------------------------------------------------
-- URI: display://0
- protocol: display
- location: 0
-- deviceType: display
-- ioType: output
-- width: 3840
-- height: 2160
-- frameRate: 0
-- numBuffers: 4
-- zeroCopy: true
------------------------------------------------
[TRT] running model command: tao-model-downloader.sh peoplenet_pruned_quantized_v2.3.2
ARCH: aarch64
reading L4T version from /etc/nv_tegra_release
L4T BSP Version: L4T R36.3.0
[TRT] downloading peoplenet_pruned_quantized_v2.3.2
resnet34_peoplenet_pruned_int8.etlt 100%[==============================================================================================================================================>] 8.53M 25.9MB/s in 0.3s
resnet34_peoplenet_pruned_int8.txt 100%[==============================================================================================================================================>] 9.20K --.-KB/s in 0.001s
labels.txt 100%[==============================================================================================================================================>] 17 --.-KB/s in 0s
colors.txt 100%[==============================================================================================================================================>] 27 --.-KB/s in 0s
[TRT] downloading tao-converter from https://api.ngc.nvidia.com/v2/resources/nvidia/tao/tao-converter/versions/v3.22.05_trt8.4_aarch64/files/tao-converter
tao-converter 100%[==============================================================================================================================================>] 128.62K --.-KB/s in 0.02s
detectNet -- converting TAO model to TensorRT engine:
-- input resnet34_peoplenet_pruned_int8.etlt
-- output resnet34_peoplenet_pruned_int8.etlt.engine
-- calibration resnet34_peoplenet_pruned_int8.txt
-- encryption_key tlt_encode
-- input_dims 3,544,960
-- output_layers output_bbox/BiasAdd,output_cov/Sigmoid
-- max_batch_size 1
-- workspace_size 4294967296
-- precision int8
./tao-converter: error while loading shared libraries: libcrypto.so.1.1: cannot open shared object file: No such file or directory
[TRT] failed to convert model 'resnet34_peoplenet_pruned_int8.etlt' to TensorRT...
[TRT] failed to download model after 2 retries
[TRT] if this error keeps occuring, see here for a mirror to download the models from:
[TRT] https://github.com/dusty-nv/jetson-inference/releases
[TRT] failed to download built-in detection model 'peoplenet-pruned'
detectnet: failed to load detectNet model
It seems the model was downloaded properly, including tao-converter itself. Here are the downloaded contents:
jet@sky:~/jetson-inference/data/networks/peoplenet_pruned_quantized_v2.3.2$ ll
total 8900
drwxrwxr-x 2 jet jet 4096 Dec 6 02:47 ./
drwxrwxr-x 22 jet jet 4096 Dec 6 02:47 ../
-rw-rw-r-- 1 jet jet 27 Dec 6 02:47 colors.txt
-rw-rw-r-- 1 jet jet 17 Dec 9 2022 labels.txt
-rw-rw-r-- 1 jet jet 8948092 Dec 9 2022 resnet34_peoplenet_pruned_int8.etlt
-rw-rw-r-- 1 jet jet 9416 Dec 9 2022 resnet34_peoplenet_pruned_int8.txt
-rwxrwxr-x 1 jet jet 131712 May 26 2022 tao-converter*
What am I missing here?
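One workaround I'm considering (untested sketch, not a confirmed fix): since Ubuntu 22.04 no longer packages OpenSSL 1.1, build it from source and point the loader at it just for the conversion step. The install prefix and the 1.1.1w version below are my assumptions, not anything jetson-inference prescribes:

```shell
# Untested sketch: provide libcrypto.so.1.1 for the prebuilt tao-converter
# on JetPack 6 (Ubuntu 22.04). Version and prefix are assumptions.
# Older 1.1.1 releases live under /source/old/ on openssl.org.
wget -q --timeout=15 --tries=1 \
    https://www.openssl.org/source/old/1.1.1/openssl-1.1.1w.tar.gz
tar xzf openssl-1.1.1w.tar.gz
cd openssl-1.1.1w
./config --prefix=/opt/openssl-1.1 shared
make -j"$(nproc)"
sudo make install_sw          # installs libs only, skips docs

# Expose the 1.1 libraries only in this shell, not system-wide
export LD_LIBRARY_PATH=/opt/openssl-1.1/lib:$LD_LIBRARY_PATH

# Then re-run detectnet so it retries the engine conversion, e.g.:
#   detectnet --model=peoplenet-pruned pedestrians.mp4 pedestrians_peoplenet.mp4
```

Using `LD_LIBRARY_PATH` rather than installing the old library into `/usr/lib` keeps OpenSSL 1.1 from leaking into the rest of the system.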