TAO model download doesn't work on Jetson Nano 2GB: error while loading shared libraries: libnvmedia_tensor.so: cannot open shared object file

TAO model download is not working on my Jetson Nano 2GB.
I tried to run “tao-model-downloader.sh” provided in the jetson-inference examples.
The following is the error:

ARCH: aarch64
reading L4T version from /etc/nv_tegra_release
L4T BSP Version: L4T R32.5.2
[TRT] downloading trafficcamnet_pruned_v1.0.3
mkdir: cannot create directory ‘/usr/local/bin/networks/trafficcamnet_pruned_v1.0.3’: File exists
resnet18_trafficcamnet_pruned.e 100%[=====================================================>] 5.20M 2.34MB/s in 2.2s
trafficcamnet_int8.txt 100%[=====================================================>] 4.75K --.-KB/s in 0.001s
labels.txt 100%[=====================================================>] 29 --.-KB/s in 0s
colors.txt 100%[=====================================================>] 38 --.-KB/s in 0s
[TRT] downloading tao-converter from https://api.ngc.nvidia.com/v2/resources/nvidia/tao/tao-converter/versions/v3.21.11_trt8.0_aarch64/files/tao-converter
tao-converter 100%[=====================================================>] 120.72K 156KB/s in 0.8s
detectNet -- converting TAO model to TensorRT engine:
-- input resnet18_trafficcamnet_pruned.etlt
-- output resnet18_trafficcamnet_pruned.etlt.engine
-- calibration trafficcamnet_int8.txt
-- encryption_key tlt_encode
-- input_dims 3,544,960
-- output_layers output_bbox/BiasAdd,output_cov/Sigmoid
-- max_batch_size 1
-- workspace_size 4294967296
-- precision fp16
./tao-converter: error while loading shared libraries: libnvmedia_tensor.so: cannot open shared object file: No such file or directory
[TRT] failed to convert model 'resnet18_trafficcamnet_pruned.etlt' to TensorRT…

Can you run the command below and share the result?
$ dpkg -l | grep cuda
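A slightly broader check (a sketch; exact package names vary by JetPack release) also catches the TensorRT packages, which is useful since the converter depends on both stacks:

```shell
# List installed CUDA, cuDNN, and TensorRT packages; duplicate major
# versions in this output usually indicate a mixed installation.
dpkg -l 2>/dev/null | grep -E 'cuda|cudnn|nvinfer|tensorrt' || true
```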

dpkg_op (9.9 KB)
Dumped the output to a file, as pasting it here could be confusing due to the long list.

It seems that your Jetson Nano has two versions of CUDA and TensorRT installed.
If possible, please try a Jetson Nano with a fresh installation.

Or, please try this version:
wget --content-disposition "https://api.ngc.nvidia.com/v2/resources/nvidia/tao/tao-converter/versions/v3.21.08_trt7.1_aarch64/files/tao-converter"

Refer to TAO Converter | NVIDIA NGC

I was trying to do it without re-imaging, as there is a lot of code on the memory card. Is there any way to uninstall the old versions and install only one version of CUDA & TensorRT without re-imaging the memory card?
I tried the wget command; the following was the output:
wget --content-disposition ‘https://api.ngc.nvidia.com/v2/resources/nvidia/tao/tao-converter/versions/v3.21.08_trt7.1_aarch64/files/tao-converter’
https://api.ngc.nvidia.com/v2/resources/nvidia/tao/tao-converter/versions/v3.21.08_trt7.1_aarch64/files/tao-converter’: Scheme missing.
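The "Scheme missing" error comes from the smart quotes the forum substituted into the pasted command: the shell hands wget a URL that begins with the literal ‘ character, so wget no longer sees the https:// scheme. Retyping the command with plain ASCII quotes should fix it; a minimal sketch of the difference:

```shell
# With plain ASCII quotes the URL starts with a real scheme; the pasted
# smart quote made it start with the quote character instead, hence
# wget's "Scheme missing" error.
url="https://api.ngc.nvidia.com/v2/resources/nvidia/tao/tao-converter/versions/v3.21.08_trt7.1_aarch64/files/tao-converter"
case "$url" in
  https://*) echo "scheme ok" ;;        # what wget needs to see
  *)         echo "Scheme missing" ;;   # what the smart quote caused
esac
# then: wget --content-disposition "$url"
```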

Hi @dipankar123sil, sorry, this appears to be an issue with my tao-model-downloader.sh script in jetson-inference.

Can you try changing this line from jetson-inference/tools/tao-model-downloader.sh (line 148) to the URL for TRT 7.1:

#local url="https://api.ngc.nvidia.com/v2/resources/nvidia/tao/tao-converter/versions/v3.21.11_trt8.0_aarch64/files/tao-converter" 
local url="https://api.ngc.nvidia.com/v2/resources/nvidia/tao/tao-converter/versions/v3.21.08_trt7.1_aarch64/files/tao-converter"
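If editing the script by hand is inconvenient, the same swap can be done with sed (a sketch; it assumes the v3.21.11_trt8.0 URL is still present in the script). It's demonstrated here on a scratch copy of the line; on the Jetson you would point sed at tools/tao-model-downloader.sh instead:

```shell
# Write a copy of the original line to a scratch file, then swap the
# TRT 8.0 converter version for the TRT 7.1 one in place.
echo 'local url="https://api.ngc.nvidia.com/v2/resources/nvidia/tao/tao-converter/versions/v3.21.11_trt8.0_aarch64/files/tao-converter"' > /tmp/downloader-line
sed -i 's|v3.21.11_trt8.0_aarch64|v3.21.08_trt7.1_aarch64|' /tmp/downloader-line
cat /tmp/downloader-line   # now points at the TRT 7.1 build
```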

Then re-run the following to re-install the script:

cd jetson-inference/build
cmake ../
make
sudo make install

Then try running the detectnet program again.

Hi @dusty_nv ,
I tried the same as you mentioned above but am getting a cmake error. Attaching the files. Just a disclaimer: I made the jetson-inference directory inside another folder called new. I hope this isn't a problem, in case any script depends on the path being home/jetson-inference.
CMakeOutput.log (45.5 KB)
CMakeError.log (2.8 KB)

Hmm…had you previously compiled jetson-inference, or were you running the container before?

TBH your JetPack-L4T version is fairly old and I would probably recommend upgrading to JetPack 4.6 / L4T R32.7 which is the stable version for Nano.

EDIT: if you are having problems with cmake, you can just edit the version of tao-model-downloader.sh found under /usr/local/bin

Yes, I had previously compiled jetson-inference in the same folder. If I delete it and re-run, will that work? Otherwise, if there's no other option, I'll re-image the SD card.

I believe it should, but I'm not sure what would make that basic check for the pthreads library fail… regardless, I think re-flashing your SD card with the latest JetPack 4.6 image is a good idea: https://developer.nvidia.com/jetpack-sdk-463

@dusty_nv .
I tried re-imaging the SD card, and the new JetPack version is 4.6.
I also modified tao-model-downloader.sh line 148 as you mentioned,
and used the command ./tao-model-downloader.sh dashcamnet_pruned_v1.0.3

The following was the error (I'm also attaching the modified .sh for your reference: tao-model-downloader.sh (9.9 KB)):
ARCH: aarch64
reading L4T version from /etc/nv_tegra_release
L4T BSP Version: L4T R32.7.1
[TRT] downloading dashcamnet_pruned_v1.0.3
resnet18_dashcam 100%[========>] 6.64M 1.42MB/s in 10s
dashcamnet_int8. 100%[========>] 4.05K --.-KB/s in 0s
labels.txt 100%[========>] 29 --.-KB/s in 0s
colors.txt 100%[========>] 38 --.-KB/s in 0s
[TRT] downloading tao-converter from https://api.ngc.nvidia.com/v2/resources/nvidia/tao/tao-converter/versions/v3.21.08_trt7.1_aarch64/files/tao-converter
tao-converter 100%[========>] 122.20K 162KB/s in 0.8s
detectNet -- converting TAO model to TensorRT engine:
-- input resnet18_dashcamnet_pruned.etlt
-- output resnet18_dashcamnet_pruned.etlt.engine
-- calibration dashcamnet_int8.txt
-- encryption_key tlt_encode
-- input_dims 3,544,960
-- output_layers output_bbox/BiasAdd,output_cov/Sigmoid
-- max_batch_size 1
-- workspace_size 4294967296
-- precision fp16
./tao-converter: error while loading shared libraries: libnvinfer.so.7: cannot open shared object file: No such file or directory
[TRT] failed to convert model 'resnet18_dashcamnet_pruned.etlt' to TensorRT…

@dipankar123sil Now that you are running JetPack 4.6, you shouldn't need to make changes to tao-model-downloader.sh, because you have TensorRT 8. Can you try restoring the original tao-model-downloader.sh script?

cd jetson-inference
git checkout tools/tao-model-downloader.sh
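As a quick sanity check (a sketch; the library path assumed here is the usual aarch64 L4T location), you can confirm which TensorRT runtime is installed. On JetPack 4.6 you should see libnvinfer.so.8, which is why the TRT 7.1 converter failed with the libnvinfer.so.7 error above:

```shell
# List the installed TensorRT runtime libraries; JetPack 4.6 ships
# libnvinfer.so.8, so a converter built against TRT 7 cannot load.
ls /usr/lib/aarch64-linux-gnu/libnvinfer.so* 2>/dev/null \
  || echo "no libnvinfer found at the expected path"
```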

Thanks @dusty_nv, re-imaging worked.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.