DeepStream 4.0 can't run .etlt model file

When I run a .etlt model file, I get an error.
I run this command:
deepstream-app -c deepstream_app_config_ssd.txt
The error is:
Using winsys: x11
Creating LL OSD context new
0:00:01.212971323 22530 0x1c26ca00 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:02.349060963 22530 0x1c26ca00 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): UffParser: Could not read buffer.
NvDsInferCudaEngineGetFromTltModel: Failed to parse UFF model
0:00:02.361163908 22530 0x1c26ca00 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:generateTRTModel(): Failed to create network using custom network creation function
0:00:02.361284196 22530 0x1c26ca00 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Failed to create engine from model files
0:00:02.361960128 22530 0x1c26ca00 WARN nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary_gie_classifier> error: Failed to create NvDsInferContext instance
0:00:02.362010624 22530 0x1c26ca00 WARN nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary_gie_classifier> error: Config file path: /home/nvidia/deepstream_sdk_v4.0_jetson/sources/objectDetector_SSD/config_infer_primary_ssd_change_tlt.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
** ERROR: main:651: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie_classifier: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(692): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier:
Config file path: /home/nvidia/deepstream_sdk_v4.0_jetson/sources/objectDetector_SSD/config_infer_primary_ssd_change_tlt.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
App run failed

The content of deepstream_app_config_ssd.txt is:
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1
gie-kitti-output-dir=streamscl

[tiled-display]
enable=0
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
num-sources=1
uri=file://…/…/samples/streams/sample_1080p_h264.mp4
#uri=rtsp://admin:1111@10.0.0.155:554
gpu-id=0
cudadec-memtype=0

[streammux]
gpu-id=0
batch-size=1
batched-push-timeout=-1

## Set muxer output width and height

width=1920
height=1080
nvbuf-memory-type=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0

[osd]
enable=1
gpu-id=0
border-width=3
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
batch-size=1
gie-unique-id=1
interval=0
#labelfile-path=ssd_coco_labels.txt
#model-engine-file=sample_ssd_relu6.uff_b1_fp32.engine
#model-engine-file=ssd_mobilenet_v1_coco.uff.1.1.GPU.FP16.engine
#config-file=config_infer_primary_ssd.txt
config-file=config_infer_primary_ssd_change_tlt.txt
nvbuf-memory-type=0

The content of config_infer_primary_ssd_change_tlt.txt is:

[property]
gpu-id=0
net-scale-factor=0.0078431372
offsets=127.5;127.5;127.5
model-color-format=1
#model-engine-file=sample_ssd_relu6.uff_b1_fp32.engine
labelfile-path=ssd_coco_labels_tlt.txt
#labelfile-path=ssd_coco_labels.txt
#uff-file=sample_ssd_relu6.uff
#uff-file=ssd_mobilenet_v1_coco.uff
#uff-input-dims=3;300;300;0
#uff-input-blob-name=Input
tlt-encoded-model=ssd_resnet_20191014.etlt
tlt-model-key=1234
input-dims=3;300;300;0
uff-input-blob-name=Input
batch-size=1

## 0=FP32, 1=INT8, 2=FP16 mode

network-mode=0
#num-detected-classes=91
num-detected-classes=5
interval=1
gie-unique-id=1
is-classifier=0
output-blob-names=MarkOutput_0
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so

[class-attrs-all]
threshold=0.5
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

## Per class configuration

#[class-attrs-2]
#threshold=0.6
#roi-top-offset=20
#roi-bottom-offset=10
#detected-min-w=40
#detected-min-h=40
#detected-max-w=400
#detected-max-h=800

My hardware is a Jetson TX2.

Hi,

The error indicates that DeepStream cannot read the model successfully.
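One thing worth ruling out first: "UffParser: Could not read buffer" on a .etlt file is commonly caused by a tlt-model-key that does not exactly match the key used during tlt-export. These are the two lines in your nvinfer config to double-check (values copied from your config above):

# the decode key must be exactly the string passed to tlt-export via -k
tlt-encoded-model=ssd_resnet_20191014.etlt
tlt-model-key=1234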
Could you try this GitHub repository instead:
https://github.com/NVIDIA-AI-IOT/deepstream_4.x_apps

./deepstream-custom <config_file> <H264_file>
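For reference, a minimal sketch of building and running it on the TX2. The plain make step is an assumption (follow the repo README if it differs), and the sample stream path assumes the stock DeepStream 4.0 install location from your log:

git clone https://github.com/NVIDIA-AI-IOT/deepstream_4.x_apps.git
cd deepstream_4.x_apps
make
./deepstream-custom config_infer_primary_ssd_change_tlt.txt \
    /home/nvidia/deepstream_sdk_v4.0_jetson/samples/streams/sample_720p.h264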

Thanks.

Hello AastaLLL,
I tried "./deepstream-custom <config_file> <H264_file>", but it doesn't work and produces the same error. Do I need to rebuild TensorRT OSS?

Hello AastaLLL,
When I try to rebuild TensorRT, I can't find a TRT_RELEASE package for ARM.
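(Note: on Jetson there is no TRT_RELEASE tarball; the usual approach is to point the TensorRT OSS build at the TensorRT libraries that JetPack already installed. A sketch, assuming the release/5.1 branch to match DeepStream 4.0's TensorRT and GPU_ARCHS=62 for a TX2; flag names follow the TensorRT OSS build instructions:)

git clone -b release/5.1 https://github.com/NVIDIA/TensorRT.git
cd TensorRT
git submodule update --init --recursive
mkdir -p build && cd build
cmake .. -DGPU_ARCHS=62 \
         -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ \
         -DTRT_BIN_DIR=`pwd`/out
# build only the plugin library, then replace the system libnvinfer_plugin.so with it
make nvinfer_plugin -j$(nproc)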

This issue is answered in topic 1064407 instead:
https://devtalk.nvidia.com/default/topic/1064407/transfer-learning-toolkit/how-to-export-model-using-tlt-converter-for-jetson-nano/
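(The gist, for anyone landing here: run the aarch64 tlt-converter on the Jetson itself to turn the .etlt into a TensorRT engine. A sketch, reusing the key and model name from the config above; -o NMS and -d 3,300,300 are assumptions based on the TLT SSD documentation:)

# convert the encrypted TLT model to a TensorRT engine on the target device
./tlt-converter -k 1234 \
                -d 3,300,300 \
                -o NMS \
                -e ssd_resnet_20191014.engine \
                ssd_resnet_20191014.etlt

The generated engine can then be referenced via model-engine-file in the nvinfer config.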