TRT error with deepstream-custom app on Xavier

Hi,

I have followed the steps described to install TensorRT OSS and deepstream_tlt_apps on my Xavier (TRT 7.0, DeepStream 5.0, and CUDA 10.2), and both libraries were built successfully. However, when I try to run the example application for FRCNN, I get the following error:

./deepstream-custom -c pgie_frcnn_tlt_config.txt -i $DS_SRC_PATH/samples/streams/sample_720p.h264

Now playing: pgie_frcnn_tlt_config.txt
Opening in BLOCKING MODE
Opening in BLOCKING MODE
0:00:00.239738191 12739 0x558ded24f0 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: UffParser: UFF buffer empty
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:01.450726826 12739 0x558ded24f0 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 1]: build engine file failed
Bus error (core dumped)
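One thing I can check is whether the model file the config points to actually exists and is non-empty before the app tries to parse it. A minimal sketch, assuming the config uses the `tlt-encoded-model` key as the sample TLT configs do:

```shell
# Read the model path out of the nvinfer config and make sure the
# file exists and is non-empty before launching deepstream-custom.
cfg=pgie_frcnn_tlt_config.txt
model=$(sed -n 's/^tlt-encoded-model=//p' "$cfg")
if [ -s "$model" ]; then
    echo "model found: $model ($(wc -c < "$model") bytes)"
else
    echo "missing or empty model: $model"
fi
```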

Any suggestions on how I can fix this error?

-Dilip.

I would also like to add that when I did a git clone of the deepstream_tlt_apps repository the model files were not downloaded. So I manually downloaded the faster_rcnn_resnet10.etlt file using wget from https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/blob/master/models/frcnn/faster_rcnn_resnet10.etlt
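Note that fetching a github.com/.../blob/... URL with wget normally returns the HTML viewer page (or, for an LFS-tracked file, just the small pointer stub), not the binary itself. A quick way to check what actually landed on disk (the filename matches the file above):

```shell
# Inspect the first bytes of the downloaded file to see whether it is
# a Git LFS pointer stub or an HTML page instead of real model data.
f=faster_rcnn_resnet10.etlt
if head -c 200 "$f" | grep -q 'git-lfs.github.com'; then
    echo "LFS pointer stub, not the real model"
elif head -c 200 "$f" | grep -qi '<html'; then
    echo "HTML page, not the real model"
else
    echo "looks like binary data: $(wc -c < "$f") bytes"
fi
```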

The etlt file should be available after git clone. Could you please double check?

For the "UFF buffer empty" error you mentioned, please try to refer to

Hello Morganh,

Thanks for your reply. There seems to be an LFS bandwidth issue with the deepstream_tlt_apps repository. I have git-lfs installed, and when I try to clone the repo I see the following error:

git clone https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps.git
Cloning into 'deepstream_tlt_apps'...
remote: Enumerating objects: 116, done.
remote: Counting objects: 100% (116/116), done.
remote: Compressing objects: 100% (90/90), done.
remote: Total 116 (delta 35), reused 96 (delta 22), pack-reused 0
Receiving objects: 100% (116/116), 46.99 KiB | 1.27 MiB/s, done.
Resolving deltas: 100% (35/35), done.
Downloading TRT-OSS/Jetson/TRT7.1/libnvinfer_plugin.so.7.1.3_nano_tx2_xavier_nx (9.1 MB)
Error downloading object: TRT-OSS/Jetson/TRT7.1/libnvinfer_plugin.so.7.1.3_nano_tx2_xavier_nx (846691c): Smudge error: Error downloading TRT-OSS/Jetson/TRT7.1/libnvinfer_plugin.so.7.1.3_nano_tx2_xavier_nx (846691ccb0caa16a25d116084da4bf8b9a783130ddd45c16f57cef1f0f7f40e6): batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access.

Errors logged to /home/dilip/Develop/deepstream_tlt_apps/.git/lfs/logs/20200727T113253.191502438.log
Use git lfs logs last to view the log.
error: external filter ‘git-lfs filter-process’ failed
fatal: TRT-OSS/Jetson/TRT7.1/libnvinfer_plugin.so.7.1.3_nano_tx2_xavier_nx: smudge filter lfs failed
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry the checkout with 'git checkout -f HEAD'

Even though all the source files are cloned, the model directories contain the cal.bin files but not the .etlt model files.
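When a checkout fails like this, any LFS-tracked file that did not download is left behind as a small text pointer stub. A sketch for listing which files are still stubs (the models directory name is from the repo layout; the check itself is generic):

```shell
# List files under models/ that are still Git LFS pointer stubs.
# Real LFS pointers start with a "version https://git-lfs..." line.
find models -type f | while read -r f; do
    if head -c 40 "$f" | grep -q '^version https://git-lfs'; then
        echo "still a pointer stub: $f"
    fi
done
```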

-Dilip.

I believe the "UFF buffer empty" error is also related to the git-lfs cloning issue. Since the models were not being downloaded via git clone, I tried downloading the raw model files using wget and curl, but it appears they weren't successful: because they are LFS files, whatever model files were created were incomplete. I had cloned the repository on my Ubuntu PC a couple of months ago; when I manually copied over that .etlt model file for Faster R-CNN, the deepstream-custom app loaded the file, created the engine, and ran successfully.

-Dilip.

See https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps
Did you run the step below?

1. Install git-lfs (git >= 1.8.2)

git-lfs is needed to download the model files larger than 5 MB.

curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install

Hello Morgan,

Yes, I did install git-lfs per the installation steps for the deepstream_tlt_apps repo. It appears there is a known issue with this repo and Git LFS; I posted an issue on the GitHub repository and got a response acknowledging the problem and saying they are currently working on it.

https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/issues/18

Thanks,
Dilip.