Failing to build sample from TLT-DEEPSTREAM

• Hardware Platform (Jetson / GPU): Jetson AGX Xavier
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only):
• TensorRT Version: 7.0.0
root@8e2915f5a1d2:/home/deepstream_tlt_apps-master# ./deepstream-custom -c pgie_frcnn_tlt_config.txt -i $DS_SRC_PATH/samples/streams/sample_720p.h264
Warning: ‘input-dims’ parameter has been deprecated. Use ‘infer-dims’ instead.
Now playing: pgie_frcnn_tlt_config.txt
0:00:00.323130999 86751 0x556e52685d90 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: UffParser: Could not read buffer.
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:00.436360550 86751 0x556e52685d90 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
Segmentation fault (core dumped)

Could you please share pgie_frcnn_tlt_config.txt?

# Copyright (c) 2018-2020 NVIDIA Corporation. All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

# Following properties are mandatory when engine files are not specified:
#   int8-calib-file(Only in INT8)
#   Caffemodel mandatory properties: model-file, proto-file, output-blob-names
#   UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
#   ONNX: onnx-file
#
# Mandatory properties for detectors:
#   num-detected-classes
#
# Optional properties for detectors:
#   enable-dbscan(Default=false), interval(Primary mode only, Default=0)
#   custom-lib-path,
#   parse-bbox-func-name
#
# Mandatory properties for classifiers:
#   classifier-threshold, is-classifier
#
# Optional properties for classifiers:
#   classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
#   operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
#   input-object-min-width, input-object-min-height, input-object-max-width,
#   input-object-max-height
#
# Following properties are always recommended:
#   batch-size(Default=1)
#
# Other optional properties:
#   net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
#   model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
#   mean-file, gie-unique-id(Default=0), offsets, gie-mode (Default=1 i.e. primary),
#   custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=./nvdsinfer_customparser_frcnn_tlt/frcnn_labels.txt
tlt-encoded-model=./frcnn_kitti_resnet50_retrain_fp16.etlt
tlt-model-key=tlt
uff-input-dims=3;272;480;0
uff-input-blob-name=input_image
batch-size=1
# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=5
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
output-blob-names=dense_regress_td/BiasAdd;dense_class_td/Softmax;proposal
parse-bbox-func-name=NvDsInferParseCustomFrcnnTLT
custom-lib-path=./nvdsinfer_customparser_frcnn_tlt/libnvds_infercustomparser_frcnn_tlt.so

[class-attrs-all]
pre-cluster-threshold=0.6
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
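One mundane cause of "UffParser: Could not read buffer" worth ruling out is that the relative paths in the config above (the .etlt model, label file, and custom parser library) do not resolve from the directory where deepstream-custom is launched. A quick hypothetical check, using the paths from the config (`check_path` is my own helper name):

```shell
# check_path: report whether a file referenced by the config exists
check_path() {
  if [ -f "$1" ]; then echo "OK: $1"; else echo "MISSING: $1"; fi
}

# Run from the deepstream_tlt_apps directory so the relative paths
# match the ones in the config above.
check_path ./frcnn_kitti_resnet50_retrain_fp16.etlt
check_path ./nvdsinfer_customparser_frcnn_tlt/frcnn_labels.txt
check_path ./nvdsinfer_customparser_frcnn_tlt/libnvds_infercustomparser_frcnn_tlt.so
```

If any line prints MISSING, fix the path or the working directory before suspecting the key.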

Your config has:

tlt-model-key=tlt

Please check if your NGC key is correct.

I checked it. No problem

  1. Please double check if your “frcnn_kitti_resnet50_retrain_fp16.etlt” is trained with the correct “tlt-model-key”.
  2. If there is still an issue, please try to narrow down with below config file
    https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/blob/master/pgie_frcnn_tlt_config.txt
    along with the model files in deepstream_tao_apps/models/frcnn on the release/tlt2.0 branch of the NVIDIA-AI-IOT/deepstream_tao_apps GitHub repository.
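When narrowing down, it can also help to sanity-check that a config file carries the keys required for an etlt detector (per the header comments and the working config above). A minimal hypothetical checker — `missing_keys` and the `REQUIRED` list are my own names, not part of DeepStream:

```python
import configparser

# Keys an etlt detector config is expected to carry, taken from the
# config shown above (hypothetical helper, not a DeepStream API).
REQUIRED = [
    "tlt-encoded-model", "tlt-model-key",
    "uff-input-blob-name", "output-blob-names",
    "num-detected-classes",
]

def missing_keys(path):
    """Return the expected [property] keys absent from a config file."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    props = cfg["property"] if cfg.has_section("property") else {}
    return [k for k in REQUIRED if k not in props]
```

Usage: `missing_keys("pgie_frcnn_tlt_config.txt")` returns an empty list when all expected keys are present.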

I set tlt-model-key to my NGC key, but I get the same error.

So, please try to narrow down as mentioned above:
use the GitHub released models and their config file.

I get the same error:
root@8e2915f5a1d2:/home/deepstream_tlt_apps-release-tlt2.0# ./deepstream-custom -c pgie_frcnn_tlt_config.txt -i $DS_SRC_PATH/samples/streams/sample_720p.h264
Warning: ‘input-dims’ parameter has been deprecated. Use ‘infer-dims’ instead.
Now playing: pgie_frcnn_tlt_config.txt
0:00:00.329121248 17909 0x561a1ccfa990 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: UffParser: UFF buffer empty
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:00.372118237 17909 0x561a1ccfa990 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
Segmentation fault (core dumped)

Also, please note that FasterRCNN training requires setting the NGC API key in the training spec:

enc_key: '$KEY'

$ sed -i 's/$KEY/'"$KEY/g" your_spec.txt

Otherwise, the key will not be correct.
Please check whether you set it.
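The quoting in the `sed` command above is easy to get wrong: the single-quoted part keeps the literal `$KEY` placeholder as the pattern, while the double-quoted part lets the shell expand the `$KEY` variable into the replacement. A minimal sketch with a hypothetical spec file and key value:

```shell
# Hypothetical spec file containing the literal $KEY placeholder
printf "enc_key: '\$KEY'\n" > demo_spec.txt

# Hypothetical key value (not a real NGC key)
KEY=example-ngc-key

# Same quoting as the command above: '...' keeps the $KEY pattern
# literal for sed, "..." expands the shell variable into the replacement
sed -i 's/$KEY/'"$KEY/g" demo_spec.txt

cat demo_spec.txt   # → enc_key: 'example-ngc-key'
```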

I checked enc_key. No problem

As mentioned, please try to narrow down the issue:
use the GitHub released models and their config file.

I downloaded the project from GitHub (NVIDIA-AI-IOT/deepstream_tao_apps at release/tlt2.0),
but the same error occurs.

That is not expected. Other users do not hit this issue when they simply download the GitHub release and run its etlt model with its config file.
Please double-check, or re-run the GitHub steps one by one.