DeepStream can't create the .engine from the .etlt for a custom Mask R-CNN model trained with TLT 3.0

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
GPU (RTX 3060)
• DeepStream Version
5.1
• TensorRT Version
7.2.3
• NVIDIA GPU Driver Version (valid for GPU only)
460.73.01
• Issue Type( questions, new requirements, bugs)
question

I am working inside the Docker container for DeepStream 5.1. I CANNOT upgrade the version, so please do not suggest that.

I trained a Mask R-CNN model using TLT on a custom dataset, and now I need to use that model in DeepStream.
Since that version of DeepStream doesn't support Mask R-CNN out of the box, I compiled a custom parser, following [these instructions](Deploying to DeepStream for MaskRCNN - NVIDIA Docs).
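For reference, the build steps were roughly the following (a sketch, not a verified recipe: the clone URL, the `/tmp/deepstream_tlt_apps` destination, and `CUDA_VER=11.1` for the DeepStream 5.1 container are my assumptions):

```shell
# Clone the TLT 3.0 sample apps and build the custom post-processor.
git clone -b release/tlt3.0 \
    https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git /tmp/deepstream_tlt_apps
cd /tmp/deepstream_tlt_apps/post_processor
export CUDA_VER=11.1   # assumption: CUDA version shipped in the DS 5.1 container
make                   # produces libnvds_infercustomparser_tlt.so
```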

The parser compiled with no problem, and I changed my config_infer file to the following.

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=<labels path>/labels.txt
tlt-encoded-model=<model path>/model.step-25000.etlt
#model-engine-file= <once it is generated I will add it here>
tlt-model-key=<secret key>
uff-input-dims=3;1024;1920;0
uff-input-blob-name=Input
batch-size=1
#network-mode=2
num-detected-classes=5
interval=0
gie-unique-id=1
is-classifier=0

## parser
output-blob-names=generate_detections;mask_fcn_logits/BiasAdd
cluster-mode=4
network-type=3 ## 3 is for instance segmentation network
output-instance-mask=1
parse-bbox-instance-mask-func-name=NvDsInferParseCustomMrcnnTLT
custom-lib-path=/tmp/deepstream_tlt_apps/post_processor/libnvds_infercustomparser_tlt.so

[class-attrs-all]
pre-cluster-threshold=0.6
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=35
detected-min-h=35
#detected-max-w=1000
detected-max-h=850

My problem is that DeepStream is not able to generate the engine file; it fails with the following error:

[NvDCF] Initialized
0:00:00.272590254  5144 0x55df4f6f0f90 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: UffParser: Output error: Output mask_fcn_logits/BiasAdd not found
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:01.537341226  5144 0x55df4f6f0f90 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 1]: build engine file failed
corrupted size vs. prev_size
Aborted (core dumped)

One thing I noticed is that the guide says:
parse-bbox-instance-mask-func-name=NvDsInferParseCustomMrcnnTLT

but in the source code of that repo, that function does not exist. Instead there is NvDsInferParseCustomMrcnnTLTV2.

So, my questions are:
Which tag of the repo should I be using? (The instructions say `git clone -b release/tlt3.0` of NVIDIA-AI-IOT/deepstream_tao_apps, the sample apps for deploying models trained with TAO on DeepStream.)
Or, which tutorial should I be following?
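In the meantime, one way to check the engine build and the output blob names outside DeepStream is to run the TLT 3.0 `tlt-converter` on the .etlt directly; if the names are wrong it should fail with the same UffParser output error. A sketch (the key, dims, and output names below are just the ones from my config, not verified values):

```shell
# Offline engine build with tlt-converter (TLT 3.0):
#   -k  encryption key used at export time
#   -d  input dims (C,H,W), matching uff-input-dims
#   -o  comma-separated output blob names
#   -e  path for the generated engine
./tlt-converter \
  -k <secret key> \
  -d 3,1024,1920 \
  -o generate_detections,mask_fcn_logits/BiasAdd \
  -e model.step-25000.engine \
  model.step-25000.etlt
```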

thank you.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

As the log shows, the failure is caused by "Output mask_fcn_logits/BiasAdd not found": the UFF graph inside your .etlt does not contain an output with that name. Please correct output-blob-names in the configuration file so it matches the outputs of your exported model.
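Putting both observations in this thread together, the parser section would need the function name that actually exists in the release/tlt3.0 code, and output-blob-names must match what `tlt mask_rcnn export` reported for this particular model. A sketch of the corrected lines (`<mask output name>` is a placeholder, not a verified value):

```
## parser — function name per the code actually present in release/tlt3.0
parse-bbox-instance-mask-func-name=NvDsInferParseCustomMrcnnTLTV2
## replace <mask output name> with the mask output reported by your export step
output-blob-names=generate_detections;<mask output name>
```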

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.