TLT-deepstream sample app error

Yes, I flashed my device with JetPack 4.4, which installed DeepStream 5.0, CUDA 10.2, and cuDNN 8.

According to the error log, I think this error occurs because the TensorRT plugin was not updated. Please double-check your steps for it.
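As a quick, hedged sanity check (not part of the official steps), you can confirm which plugin library the dynamic loader actually resolves:

$ ldconfig -p | grep libnvinfer_plugin
$ ls -l /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*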

Which device are you using?

Hi, I am using a Jetson TX2 and I think I followed the TensorRT installation steps: https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/tree/master/TRT-OSS/Jetson

I tried to run the tlt-converter, and it failed with the same error. This suggests TRT is not running correctly.

Please check whether you used the correct GPU_ARCHS when you built the TRT OSS plugin.

|Jetson Platform|GPU_ARCHS|
|---|---|
|TX2|62|
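For reference, a minimal build sketch following the TRT-OSS/Jetson README layout (cmake flags and paths assumed from that README; on TX2, GPU_ARCHS is 62 per the table above):

$ cd TensorRT/build
$ cmake .. -DGPU_ARCHS=62 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
$ make nvinfer_plugin -j$(nproc)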

In GitHub - NVIDIA/TensorRT, the README says:
##########################
After building successfully, libnvinfer_plugin.so* will be generated under `pwd`/out/.

sudo cp `pwd`/out/libnvinfer_plugin.so.7.m.n  /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.x.y

##########################
After building, I have the following 3 files, but I am not sure I copied them correctly. Could you help me with this step?
libnvinfer_plugin.so libnvinfer_plugin.so.7.0.0 libnvinfer_plugin.so.7.0.0.1

The 3 files are the same.
Refer to Failling in building sample from TLT-DEEPSTREAM - #15 by Morganh
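Typically (an assumption about the CMake install step, worth verifying on your build) only one of the three is a real file and the other two are symlinks to it:

$ ls -l `pwd`/out/libnvinfer_plugin.so*
# expected: libnvinfer_plugin.so and libnvinfer_plugin.so.7.0.0 are symlinks
# pointing at the real library, libnvinfer_plugin.so.7.0.0.1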

Thanks. I am using a Xavier, so I think the GPU_ARCHS is 72 and it’s correct.

I ran the copying command:
sudo cp /home/dewei/TensorRT/build/out/libnvinfer_plugin.so.7.0.0.1 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0

And I got the following result:

dewei@dewei-desktop:~$ ll /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*
lrwxrwxrwx 1 root root      26 Jun 10 00:02 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.0.0 -> libnvinfer_plugin.so.7.1.0*
-rwxr-xr-x 1 root root 4652648 Jun 10 00:00 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0*

Then, I tried to run the sample:
dewei@dewei-desktop:~/Documents/deepstream_tlt_apps$ ./deepstream-custom -c pgie_frcnn_tlt_config.txt -i $DS_SRC_PATH/samples/streams/sample_720p.h264

The same error is still there:

(deepstream-custom:9671): GStreamer-WARNING **: 00:03:54.015: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so': libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
One element could not be created. Exiting.

Thanks
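Note that the warning names libnvinfer_plugin.so.7, the library’s SONAME, and the ll output above no longer shows a link by that name, so the loader cannot resolve it. A hedged sketch of recreating that link (the 7.1.0 version number is taken from the listing above):

$ cd /usr/lib/aarch64-linux-gnu
$ sudo ln -sf libnvinfer_plugin.so.7.1.0 libnvinfer_plugin.so.7
$ sudo ldconfig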

Please review Failling in building sample from TLT-DEEPSTREAM

Thanks for the instructions, but I am confused. The error is still there when I run the sample custom app:

(deepstream-custom:9671): GStreamer-WARNING **: 00:03:54.015: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so': libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
One element could not be created. Exiting.

@wdw0908
For your case, I already gave the solution in Failling in building sample from TLT-DEEPSTREAM - #15 by Morganh

Please restore the softlinks to their original state, then follow my steps and retry.
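For reference, a hedged sketch of one way to get back to the stock state on JetPack 4.4 before retrying (the package name is an assumption; verify with dpkg -l | grep nvinfer):

$ sudo apt-get install --reinstall libnvinfer-plugin7
$ sudo ldconfig
$ ll /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*   # confirm the original links are back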

Yes, I have followed the exact same steps in your solution, but it does not resolve my original error.

@wdw0908
Please paste your latest result in that topic, thanks. Let’s sync at that link.

$ ll /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*

Thank you, Morganh. This solved my problem. Cheers

Hi Morganh, I still have trouble running:

dewei@dewei-desktop:~/Documents/deepstream_tlt_apps$ ./deepstream-custom -c pgie_ssd_tlt_config.txt -i sample_720p.h264
Now playing: pgie_ssd_tlt_config.txt
Opening in BLOCKING MODE 
Opening in BLOCKING MODE 
0:00:00.215516495 12311   0x5578f942f0 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: UffParser: UFF buffer empty
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:01.364630606 12311   0x5578f942f0 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 1]: build engine file failed
Segmentation fault (core dumped)

I don’t know why the UFF buffer is empty. My TensorRT was installed as part of JetPack on the Xavier, and I followed the link
deepstream_tao_apps/README.md at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub to set up the environment, as I described in another topic on the TLT forum.
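One cause worth ruling out when the parser reports an empty buffer: if git-lfs is missing or broken, the repo’s .etlt files are small text pointer stubs rather than real models. A quick hedged check (model path assumed from the repo layout):

$ ls -lh models/ssd/ssd_resnet18.etlt      # a pointer stub is only ~130 bytes
$ head -c 60 models/ssd/ssd_resnet18.etlt  # a stub starts with "version https://git-lfs.github.com/spec/v1"
$ git lfs pull                             # re-fetch the real binaries if needed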

@wdw0908,
You are now running SSD. From TLT-deepstream sample app error - #4 by wdw0908, you were running FRCNN.
So, please paste your pgie_ssd_tlt_config.txt here.

The config is the same as the one in the git repo.

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=./nvdsinfer_customparser_ssd_tlt/ssd_labels.txt
tlt-encoded-model=./models/ssd/ssd_resnet18.etlt
tlt-model-key=nvidia_tlt
uff-input-dims=3;544;960;0
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=4
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
output-blob-names=NMS
parse-bbox-func-name=NvDsInferParseCustomSSDTLT
custom-lib-path=./nvdsinfer_customparser_ssd_tlt/libnvds_infercustomparser_ssd_tlt.so

[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
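Since every path in this config is relative, it is worth confirming they resolve from the directory the app runs in (a hedged check; the directory comes from the shell prompt earlier in the thread):

$ cd ~/Documents/deepstream_tlt_apps
$ ls -l ./models/ssd/ssd_resnet18.etlt ./nvdsinfer_customparser_ssd_tlt/libnvds_infercustomparser_ssd_tlt.so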

@wdw0908
For your new issue, “UFF buffer empty”, please double-check the steps mentioned in the GitHub repo.

Hi, it was solved by reinstalling git-lfs. The installation in the git README is for amd64 by default; I manually installed the arm64 version. Thanks.
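For completeness, a hedged sketch of a manual arm64 install from the git-lfs release tarball (the version number is illustrative, and the release layout is assumed):

$ wget https://github.com/git-lfs/git-lfs/releases/download/v2.13.3/git-lfs-linux-arm64-v2.13.3.tar.gz
$ tar -xzf git-lfs-linux-arm64-v2.13.3.tar.gz
$ sudo ./install.sh
$ git lfs install
$ git lfs pull   # run inside the deepstream_tlt_apps checkout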