DeepStream deployment: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 1]: build engine file failed

deepstream-custom
DeepStreamSDK 5
CUDA Driver Version: 10.2
CUDA Runtime Version: 10.2
TensorRT Version: 7.0.0.11
cuDNN Version: 7.6.5
NVIDIA GTX 960m

I want to integrate a TLT FasterRCNN ResNet50 model trained for 'hand' into the DeepStream SDK for inference.
I got the .etlt file by using the tlt-export command in the Docker container.
I have built the TRT-OSS cropAndResizePlugin needed by FasterRCNN to deploy into DeepStream, following deepstream-tlt-apps,
and I can also run the deepstream-custom sample.
But when I try to use my .etlt file in pgie_frcnn_tlt_config.txt for deepstream-custom, I get the following error:

Command: "deepstream-custom -c pgie_frcnn_tlt_config.txt -i …/…/streams/sample_720p.h264 -d"

Now playing: pgie_frcnn_tlt_config.txt
0:00:00.352753517 21517 0x563527917d80 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
WARNING: …/nvdsinfer/nvdsinfer_model_builder.cpp:759 FP16 not supported by platform. Using FP32 mode.
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: Parameter check failed at: …/builder/Network.cpp::addInput::957, condition: isValidDims(dims, hasImplicitBatchDimension())
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: UFFParser: Failed to parseInput for node input_1
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: UffParser: Parser error: input_1: Failed to parse node - Invalid Tensor found at node input_1
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:00.982060040 21517 0x563527917d80 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 1]: build engine file failed
Segmentation fault

my config file is following : (pgie_frcnn_tlt_config.txt) :

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=./nvdsinfer_customparser_frcnn_tlt/frcnn_labels.txt
#tlt-encoded-model=./models/frcnn/faster_rcnn_resnet10.etlt
tlt-encoded-model=./models/frcnn/frcnn_kitti_epoch8_fp16.etlt
#tlt-model-key=tlt
tlt-model-key=c2NuOGlxOGlxMmhvbW05aG85YjVmbW8xN2Y6N2ZmMzNhZjMtYjdmOS00ZDFmLTk5NGEtY2YyQ5
uff-input-dims=3;640;640;0
uff-input-blob-name=input_image
batch-size=1

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=2
num-detected-classes=2
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
output-blob-names=dense_regress_td/BiasAdd;dense_class_td/Softmax;proposal
parse-bbox-func-name=NvDsInferParseCustomFrcnnTLT
custom-lib-path=./nvdsinfer_customparser_frcnn_tlt/libnvds_infercustomparser_frcnn_tlt.so

[class-attrs-all]
#pre-cluster-threshold=0.6
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

What am I missing or doing wrong?
Do I need to do anything with the UFF parser?
Please give me a direction!

Thanks in advance

@mekong0404

Sorry for late response.
It seems your GPU's compute capability is only 5.0, so it may not support FP16 mode.
You can go to this page to get some information about NVIDIA's FP16 support.

Have you tried FP32 mode?
Simply change network-mode from 2 to 0.

# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2  # Change this to 0

Yes, I tried FP32; it's the same error.

@mekong0404

The error occurred while the input dimensions were being checked.
What is the input shape of your model, especially the batch size? Is it the same as [batch=1, ch=3, H=640, W=640], as configured?

BTW, uff-input-dims is now deprecated. You can try infer-dims and uff-input-order instead.

See:
Gst-nvinfer documentation
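
For example, assuming your model's input really is 3x640x640 in CHW order, the deprecated line could be replaced like this (a sketch; double-check the dims against your exported model):

# Replaces the deprecated uff-input-dims=3;640;640;0
infer-dims=3;640;640
# 0=NCHW, 1=NHWC
uff-input-order=0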