How to export model using tlt-converter for Jetson Nano

Yes, you’re right.
The original libnvinfer_plugin.so* is located at /usr/lib/aarch64-linux-gnu.

And we are talking about replacing libnvinfer_plugin.so* throughout; please do not touch libnvinfer.so*.
See https://devtalk.nvidia.com/default/topic/1064407/transfer-learning-toolkit/how-to-export-model-using-tlt-converter-for-jetson-nano/post/5399311/#5399311.
The new libnvinfer_plugin.so* library will be available in the out folder under the build directory.
Replace the original libnvinfer_plugin.so* with the newly compiled libnvinfer_plugin.so*.

Please back up the original libnvinfer_plugin.so* in case you remove it by mistake.
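A minimal sketch of the back-up-and-replace flow. It is exercised here against scratch directories so it can be run anywhere; on an actual Nano you would use /usr/lib/aarch64-linux-gnu, your TensorRT OSS build's out/ folder, and sudo. The file contents and version numbers are illustrative only:

```shell
# Scratch dirs standing in for /usr/lib/aarch64-linux-gnu and the OSS build's
# out/ folder (use the real paths, with sudo, on an actual Jetson)
LIBDIR=$(mktemp -d)
OUTDIR=$(mktemp -d)

# Fake stock library and fake freshly built plugin, for illustration only
echo "stock 5.1.5.0" > "$LIBDIR/libnvinfer_plugin.so.5.1.5.0"
echo "oss build"     > "$OUTDIR/libnvinfer_plugin.so.5.1.5"

# 1. Back up the original so it can be restored if something breaks
cp "$LIBDIR/libnvinfer_plugin.so.5.1.5.0" "$LIBDIR/libnvinfer_plugin.so.5.1.5.0.bak"

# 2. Replace it with the newly compiled OSS plugin
cp "$OUTDIR/libnvinfer_plugin.so.5.1.5" "$LIBDIR/libnvinfer_plugin.so.5.1.5.0"

# 3. Verify the replacement took effect: the checksums should match
md5sum "$OUTDIR/libnvinfer_plugin.so.5.1.5" "$LIBDIR/libnvinfer_plugin.so.5.1.5.0"
```

On the real system you would also run sudo ldconfig afterwards so the dynamic linker cache picks up the replaced library.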

Hi Morgan,
Thanks for your quick reply.

What about the /usr/local/cuda-10.0/lib64/ location? Should libnvinfer.so.* and libnvinfer_plugin.so.* be located there or not?
In my case

ll /usr/local/cuda-10.0/lib64/libnvinfer*

lrwxrwxrwx 1 root root 26 Nov 7 20:45 /usr/local/cuda-10.0/lib64/libnvinfer_plugin.so -> libnvinfer_plugin.so.5.1.5
lrwxrwxrwx 1 root root 28 Nov 7 20:45 /usr/local/cuda-10.0/lib64/libnvinfer_plugin.so.5.1.5 -> libnvinfer_plugin.so.5.1.5.0
-rw-r--r-- 1 root root 2619200 Nov 7 20:45 /usr/local/cuda-10.0/lib64/libnvinfer_plugin.so.5.1.5.0
lrwxrwxrwx 1 root root 19 Nov 7 21:58 /usr/local/cuda-10.0/lib64/libnvinfer.so.5 -> libnvinfer.so.5.1.5*
-rwxr-xr-x 1 root root 141788184 Nov 7 21:58 /usr/local/cuda-10.0/lib64/libnvinfer.so.5.1.5*

So I don't know why libnvinfer.so* and libnvinfer_plugin.so* are available at this location…

waiting for your reply…

Thanks,
Deep

By default, after installing via JetPack, libnvinfer_plugin.so* is only available at /usr/lib/aarch64-linux-gnu/.

Can I download only TensorRT without JetPack? Because through JetPack I am forced to download other components like OpenCV as well…

Thanks.

Sorry, ignore my comment.

Hi,

Somehow I copied libnvinfer.so.5.1.6 to /usr/lib/aarch64-linux-gnu/, but when I run a sample app such as
./deepstream-test1-app ../../../../../samples/streams/sample_720p.h264
I get the following error.

One element could not be created. Exiting.

Hi Sdeep,
Please file a new topic to describe your issue and paste the full log along with command and config file.
Thanks.

Hi Morgan,

What should /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so* look like?

I have

ls -l /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*
lrwxrwxrwx 1 root root 26 Nov 7 19:45 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.5.1.5
lrwxrwxrwx 1 root root 26 Jun 5 01:22 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5 -> libnvinfer_plugin.so.5.1.6
lrwxrwxrwx 1 root root 28 Nov 7 19:45 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.1.5 -> libnvinfer_plugin.so.5.1.5.0
-rw-r--r-- 1 root root 2619200 Nov 7 19:45 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.1.5.0
-rw-r--r-- 1 root root 2619200 Nov 8 19:43 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.1.6

is it correct?

I copied it with: sudo cp out/libnvinfer_plugin.so.5.1.5 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.1.6

Below is an example of how to replace libnvinfer_plugin.so.

$ md5sum /home/nvidia/trt-oss/TensorRT/build/out/libnvinfer_plugin.so.5.1.5
237f90d6d666fa8c6e7bc25f47236ad0 /home/nvidia/trt-oss/TensorRT/build/out/libnvinfer_plugin.so.5.1.5

$  sudo cp /home/nvidia/trt-oss/TensorRT/build/out/libnvinfer_plugin.so.5.1.5    /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.1.6


$ md5sum /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.1.6
237f90d6d666fa8c6e7bc25f47236ad0 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.1.6

$ ll /usr/lib/aarch64-linux-gnu/libnvinfer_plugin*

lrwxrwxrwx 1 root root 26 Jun 5 03:52 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.5.1.6
lrwxrwxrwx 1 root root 26 Jun 5 03:52 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5 -> libnvinfer_plugin.so.5.1.6
lrwxrwxrwx 1 root root 26 Sep 25 18:15 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.1.5 -> libnvinfer_plugin.so.5.1.6
-rw-r--r-- 1 root root 2619200 Sep 19 17:58 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.1.6
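The symlink layout above, where every soname in the chain resolves to the single replaced library file, can be recreated with ln -sfn. A sketch against a scratch directory so it is safe to run anywhere; on the device the directory would be /usr/lib/aarch64-linux-gnu and the commands would need sudo:

```shell
LIBDIR=$(mktemp -d)   # stands in for /usr/lib/aarch64-linux-gnu
# Stand-in for the replaced real library file (contents illustrative)
echo "oss build" > "$LIBDIR/libnvinfer_plugin.so.5.1.6"

# Point every name in the chain at the one real file, matching the
# layout shown above (-f overwrites, -n avoids descending into links)
ln -sfn libnvinfer_plugin.so.5.1.6 "$LIBDIR/libnvinfer_plugin.so"
ln -sfn libnvinfer_plugin.so.5.1.6 "$LIBDIR/libnvinfer_plugin.so.5"
ln -sfn libnvinfer_plugin.so.5.1.6 "$LIBDIR/libnvinfer_plugin.so.5.1.5"

ls -l "$LIBDIR"/libnvinfer_plugin*
```

This way any component that loads libnvinfer_plugin.so.5 (or any other name in the chain) gets the newly compiled plugin.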

Thanks Morgan for the quick reply,
but I am still facing the issue…

./deepstream-custom pgie_frcnn_uff_config.txt sample_720p.h264
Now playing: pgie_frcnn_uff_config.txt

Using winsys: x11
Opening in BLOCKING MODE
Creating LL OSD context new
0:00:00.621955738 26108 0x55c9056640 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger: NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:00.940023898 26108 0x55c9056640 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger: NvDsInferContext[UID 1]:log(): UffParser: Could not read buffer.
NvDsInferCudaEngineGetFromTltModel: Failed to parse UFF model
0:00:00.940728252 26108 0x55c9056640 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger: NvDsInferContext[UID 1]:generateTRTModel(): Failed to create network using custom network creation function
0:00:00.940823072 26108 0x55c9056640 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger: NvDsInferContext[UID 1]:initialize(): Failed to create engine from model files
0:00:00.941257173 26108 0x55c9056640 WARN nvinfer gstnvinfer.cpp:692:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:00.941284726 26108 0x55c9056640 WARN nvinfer gstnvinfer.cpp:692:gst_nvinfer_start: error: Config file path: pgie_frcnn_uff_config.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
Running…
ERROR from element primary-nvinference-engine: Failed to create NvDsInferContext instance
Error details: gstnvinfer.cpp(692): gst_nvinfer_start (): /GstPipeline:ds-custom-pipeline/GstNvInfer:primary-nvinference-engine:
Config file path: pgie_frcnn_uff_config.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
Returned, stopping playback
Deleting pipeline

Thanks,
Deep

Hi sdeep,
This is another topic; please create a new topic and paste the full log along with your pgie_frcnn_uff_config.txt too.

Thanks a lot!

Sure Morgan,
please find it at https://devtalk.nvidia.com/default/topic/1066361/deepstream-sdk/0x55a9170640-error-nvinfer-gstnvinfer-cpp-511-gst_nvinfer_logger-lt-primary-nvinference-engine-gt-nvdsinfercontext-uid-1-log-uffparser-could-not-read-buffer-/

Did you solve this problem? I met the same error.

@xiaomaozhou26
Your issue is addressed in the topic "TLT-deepstream sample app error".

@Morganh
Hi, I want to run tlt-converter on Jetson NX. I followed:

and after doing all the setup I am getting:
sudo: ./tlt-converter: command not found

Please help on this.

@sanjay.deo
See https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/text/deploying_to_deepstream.html#id3

For the Jetson platform, the tlt-converter is available to download in the dev zone

I followed exactly the same steps.

Then I got the error.

Please run:

$ sudo chmod +x tlt-converter
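The "command not found" here comes from the downloaded binary missing its execute bit, which chmod +x fixes. A self-contained demonstration of the same failure mode, using a stand-in script rather than the real tlt-converter binary (names and output are illustrative):

```shell
# Work in a scratch directory; on the device this would be wherever the
# tlt-converter binary from the dev zone was downloaded
cd "$(mktemp -d)"
printf '#!/bin/sh\necho tlt-converter: ok\n' > tlt-converter

# Without the execute bit, running it fails with "command not found" /
# "Permission denied" depending on the shell
if ./tlt-converter 2>/dev/null; then
    echo "already executable"
else
    echo "not executable yet"
fi

chmod +x tlt-converter   # on the device: sudo chmod +x tlt-converter
./tlt-converter          # now runs normally
```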


Thanks @Morganh, it worked 🖖