TLT 2.0 models with Deepstream 5.1

Will old TLT models work with the DeepStream 5.1 SDK? I am currently trying to integrate my old TLT models with the new DeepStream SDK. Engine generation keeps failing, and I believe the issue is that the output blob is NMS in DeepStream 5.1, while TLT 2.0 generates three output blobs: proposal;dense_class_td/Softmax;dense_regress_td/BiasAdd

For your old tlt model, which was trained via TLT 2.0, please still use the old output blob names:
proposal;dense_class_td/Softmax;dense_regress_td/BiasAdd
Then it can work with DS 5.1.
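For context, the difference between the two export formats shows up directly in the config; a sketch (the single NMS blob name is the newer TLT 3.0 convention mentioned above, and the three blob names are the TLT 2.0 FasterRCNN outputs from this thread):

```ini
# TLT 3.0 FasterRCNN exports a single combined NMS output head:
#output-blob-names=NMS

# TLT 2.0 FasterRCNN exports three separate output heads;
# list all three, semicolon-separated, for DS 5.1:
output-blob-names=proposal;dense_class_td/Softmax;dense_regress_td/BiasAdd
```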

When I run that with my tlt models, I get the following errors:

Mismatch in the number of output buffers.Expected 2 output buffers, detected in the network :3
0:00:15.799994216 14877 0x25ca680 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:724> [UID = 1]: Failed to parse bboxes using custom parse function
Segmentation fault

See Deploying to Deepstream — Transfer Learning Toolkit 2.0 documentation

output-blob-names=<output_blob_names> e.g.:
dense_class_td/Softmax,dense_regress_td/BiasAdd, proposal

Same error is thrown. Here is the entire config:

[property]
gpu-id=0
net-scale-factor=1.0
offsets=112.5;112.5;112.5
model-color-format=1
labelfile-path=/home/quinn/PycharmProjects/ZE/Deepstream/data/configs/cellphone_labels_config.txt
tlt-encoded-model=/home/quinn/PycharmProjects/ZE/Deepstream/data/models/T_59_cellphone_e_7.etlt
tlt-model-key=tlt
#tlt-model-key=bmV2bWNuaXZsdG8xNDB1cnYwbDdmbWczOGc6MThkNTY2NWQtZjAyOC00NDRjLTljMWItNDM2NjAwYzM0Njcy
uff-input-dims=3;540;960;0
uff-input-blob-name=input_image

# 0=FP32, 1=INT8, 2=FP16 mode

process-mode=1
network-mode=0
num-detected-classes=2
interval=5
batch-size=1
gie-unique-id=1

# 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3=None (no clustering)

cluster-mode=0
is-classifier=0
#network-type=0
output-blob-names=dense_class_td/Softmax;dense_regress_td/BiasAdd
parse-bbox-func-name=NvDsInferParseCustomNMSTLT
custom-lib-path=/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_tlt_apps/post_processor/libnvds_infercustomparser_tlt.so
workspace-size=5000
#classifier-threshold=0.4

[class-attrs-all]
pre-cluster-threshold=0.02
post-cluster-threshold=0.02
threshold=0.04
nms-iou-threshold=0.02
#detected-max-h=50
#detected-max-w=50
#detected-min-h=50
#detected-min-w=20
#roi-top-offset=500
#eps=0.02
#group-threshold=1

Could you refer to deepstream_tlt_apps/pgie_frcnn_tlt_config.txt at release/tlt2.0.1 · NVIDIA-AI-IOT/deepstream_tlt_apps · GitHub?

output-blob-names=dense_regress_td/BiasAdd;dense_class_td/Softmax;proposal
parse-bbox-func-name=NvDsInferParseCustomFrcnnTLT

And please double check other parameters.

For TLT 2.0 deployment in DeepStream, please refer to Deploying to Deepstream — Transfer Learning Toolkit 2.0 documentation
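Putting the suggestion together, only two lines in the posted [property] section would change (a sketch; all other values kept as posted):

```ini
# was: output-blob-names=dense_class_td/Softmax;dense_regress_td/BiasAdd
# was: parse-bbox-func-name=NvDsInferParseCustomNMSTLT
output-blob-names=dense_regress_td/BiasAdd;dense_class_td/Softmax;proposal
parse-bbox-func-name=NvDsInferParseCustomFrcnnTLT
```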