TLT-deepstream sample app error

Yes, I have followed the exact same steps in your solution, but it does not resolve my original issue with the error message.

@wdw0908
Please paste your latest result in that topic, thanks. Let’s sync at that link.

$ ll /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*

Thank you, Morganh. This solved my problem. Cheers!

Hi Morganh, I still have trouble running:

dewei@dewei-desktop:~/Documents/deepstream_tlt_apps$ ./deepstream-custom -c pgie_ssd_tlt_config.txt -i sample_720p.h264
Now playing: pgie_ssd_tlt_config.txt
Opening in BLOCKING MODE 
Opening in BLOCKING MODE 
0:00:00.215516495 12311   0x5578f942f0 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: UffParser: UFF buffer empty
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:01.364630606 12311   0x5578f942f0 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 1]: build engine file failed
Segmentation fault (core dumped)

I don’t know why the UFF buffer is empty. My TensorRT was installed as part of JetPack on the Xavier, and I followed
https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/blob/master/README.md to set up the environment, as I described in another topic on the TLT forum.
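
A common cause of “UFF buffer empty” on Jetson is cloning the repo without a working git-lfs: the large `.etlt` model is then left as a small text pointer stub instead of real model data. Below is a minimal sketch of how to detect that; the path `/tmp/demo.etlt` is a hypothetical stand-in for `./models/ssd/ssd_resnet18.etlt`, and the demo fabricates a stub file so the check can be shown end to end.

```shell
# Hypothetical demo path standing in for ./models/ssd/ssd_resnet18.etlt
MODEL=/tmp/demo.etlt

# Simulate the failure mode for this demo: without git-lfs, the clone
# leaves a ~130-byte text stub that starts with this header line.
printf 'version https://git-lfs.github.com/spec/v1\n' > "$MODEL"

# A real .etlt model is binary and several MB; a git-lfs stub is tiny
# text mentioning "git-lfs" in its first line.
if head -c 64 "$MODEL" | grep -q 'git-lfs'; then
    echo "POINTER STUB: run 'git lfs pull' to fetch the real model"
else
    echo "OK: looks like real model data"
fi
```

If the check reports a pointer stub, fetching the real files with `git lfs pull` (after installing git-lfs) should fix the parse error.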

@wdw0908,
You are now running SSD; in TLT-deepstream sample app error, you were running FasterRCNN (frcnn).
So please paste your pgie_ssd_tlt_config.txt here.

The config is the same as the one in the git repo.

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=./nvdsinfer_customparser_ssd_tlt/ssd_labels.txt
tlt-encoded-model=./models/ssd/ssd_resnet18.etlt
tlt-model-key=nvidia_tlt
uff-input-dims=3;544;960;0
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=4
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
output-blob-names=NMS
parse-bbox-func-name=NvDsInferParseCustomSSDTLT
custom-lib-path=./nvdsinfer_customparser_ssd_tlt/libnvds_infercustomparser_ssd_tlt.so

[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
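
Since the config above references several local paths, a quick existence check can rule out a wrong or empty file before blaming the parser; a missing or 0-byte `.etlt` also shows up as “UFF buffer empty”. This is a sketch with a hypothetical helper, using the paths copied from the config:

```shell
# Hypothetical helper: report whether a file exists and is non-empty.
check_path() {
    if [ -s "$1" ]; then echo "OK $1"; else echo "MISSING/EMPTY $1"; fi
}

# Paths taken from pgie_ssd_tlt_config.txt; run from the repo root.
check_path ./models/ssd/ssd_resnet18.etlt
check_path ./nvdsinfer_customparser_ssd_tlt/ssd_labels.txt
check_path ./nvdsinfer_customparser_ssd_tlt/libnvds_infercustomparser_ssd_tlt.so
```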

@wdw0908
For your new issue, “UFF buffer empty”, please double-check the steps mentioned in the GitHub README.
Or refer to


Hi, it was solved by reinstalling git-lfs. The installation in the git README is for amd64 by default; I manually installed the arm64 version instead. Thanks.
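
For anyone hitting the same thing: a quick way to confirm the installed git-lfs binary matches the machine architecture is sketched below. On a Jetson Xavier `uname -m` prints `aarch64`, so an amd64 (x86-64) git-lfs binary pulled in by the default install instructions will not run there.

```shell
# Print the CPU architecture; expect aarch64 on a Jetson Xavier.
uname -m

# If git-lfs is installed, inspect its binary type; on Jetson the
# output should mention ARM aarch64, not x86-64.
if command -v git-lfs >/dev/null 2>&1 && command -v file >/dev/null 2>&1; then
    file "$(command -v git-lfs)"
else
    echo "git-lfs (or the 'file' tool) not found on PATH"
fi
```

After installing the correct build, `git lfs install` followed by `git lfs pull` inside the cloned repo fetches the real model files.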