[TRT]: UffParser: Unsupported operation _BatchTilePlugin_TRT

Environment

Baremetal or Container (if container which image + tag): nvcr.io/nvidia/deepstream:5.0-20.07-triton
TensorRT Version:
GPU Type: Tesla T4
Nvidia Driver Version: 450.51.05
CUDA Version: 10.2
CUDNN Version:
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable):
PyTorch Version (if applicable):

Hi, I’m trying to integrate a custom-trained TLT model into a Python DeepStream 5.0 application.

The custom TLT model was trained and converted to .etlt using the completely unmodified TLT Stream Analytics NGC container
nvcr.io/nvidia/tlt-streamanalytics:v2.0_py3

The deployment is being done on the completely unmodified DeepStream 5.0 NGC container
nvcr.io/nvidia/deepstream:5.0-20.07-triton

However, when trying to integrate the .etlt model into the DeepStream application code, I get the following error:
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: UffParser: Validator error: FirstDimTile_4: Unsupported operation _BatchTilePlugin_TRT
parseModel: Failed to parse UFF model

ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.

ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function

ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:14.934727996 1427 0x335ce10 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]:

Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
Fatal Python error: Segmentation fault

Any suggestions to resolve this would be helpful.
Thanks.

Hi @shubhaamm,
I hope the link below answers your query.

Thanks!

Thanks for the quick reply. I went through the linked thread above and verified that

  1. the TensorRT version of the DeepStream 5.0 container used for deployment, and

  2. the TensorRT version of the TLT container used for training and exporting the .etlt model

are the same, i.e. 7.0.0.0 (a quick way to confirm this is shown just below).
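
A quick way to confirm the TensorRT build inside each container, assuming the TensorRT Python bindings are available there (they are in the TLT container; in the DeepStream container this is my assumption), is:

import tensorrt as trt

# Prints the TensorRT version the Python bindings were built against,
# e.g. something in the 7.0.0.x series in my case.
print(trt.__version__)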

I also noticed one more thing: the _BatchTilePlugin_TRT operation that is causing the error was itself introduced in the TensorRT 7.1 release…(as seen from the GitHub page)

My TLT container is using TensorRT 7.0.0-1, so how does the fine-tuned .etlt model file contain this operation?
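
(A related diagnostic, not a fix: list the plugin creators TensorRT actually registers in the DeepStream container and look for one named BatchTilePlugin_TRT. I am assuming the UFF op _BatchTilePlugin_TRT maps to a creator of that name under the leading-underscore convention for plugin ops; the sketch below also assumes the TensorRT Python bindings and libnvinfer_plugin.so.7 are present in the container.)

import ctypes
import tensorrt as trt

# Load the plugin library and register the stock TensorRT plugins.
ctypes.CDLL("libnvinfer_plugin.so.7", mode=ctypes.RTLD_GLOBAL)
logger = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(logger, "")

# List every registered plugin creator and check for the batch-tile one.
creators = trt.get_plugin_registry().plugin_creator_list
print(sorted(c.name for c in creators))
print("BatchTilePlugin_TRT" in {c.name for c in creators})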

To solve my problem, I tried to update the TensorRT version in my DeepStream 5.0 container to TRT 7.1.3.4-ga; however, that too failed with the following error:

The following packages have unmet dependencies:
tensorrt : Depends: libnvinfer7 (= 7.1.3-1+cuda10.2) but 7.0.0-1+cuda10.2 is to be installed
Depends: libnvinfer-plugin7 (= 7.1.3-1+cuda10.2) but 7.0.0-1+cuda10.2 is to be installed
Depends: libnvparsers7 (= 7.1.3-1+cuda10.2) but 7.0.0-1+cuda10.2 is to be installed
Depends: libnvonnxparsers7 (= 7.1.3-1+cuda10.2) but 7.0.0-1+cuda10.2 is to be installed
Depends: libnvinfer-bin (= 7.1.3-1+cuda10.2) but 7.0.0-1+cuda10.2 is to be installed
Depends: libnvinfer-dev (= 7.1.3-1+cuda10.2) but 7.0.0-1+cuda10.2 is to be installed
Depends: libnvinfer-plugin-dev (= 7.1.3-1+cuda10.2) but 7.0.0-1+cuda10.2 is to be installed
Depends: libnvparsers-dev (= 7.1.3-1+cuda10.2) but 7.0.0-1+cuda10.2 is to be installed
Depends: libnvonnxparsers-dev (= 7.1.3-1+cuda10.2) but 7.0.0-1+cuda10.2 is to be installed
Depends: libnvinfer-samples (= 7.1.3-1+cuda10.2) but it is not going to be installed
Depends: libnvinfer-doc (= 7.1.3-1+cuda10.2) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
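
(As a cross-check of which TensorRT runtime is actually being loaded, independent of what apt reports: libnvinfer exports getInferLibVersion(), which returns major*1000 + minor*100 + patch, e.g. 7000 for 7.0.0 and 7103 for 7.1.3. A minimal sketch, assuming libnvinfer.so.7 is on the loader path:)

import ctypes

# Load the core TensorRT runtime library and query its version.
lib = ctypes.CDLL("libnvinfer.so.7")
lib.getInferLibVersion.restype = ctypes.c_int
print(lib.getInferLibVersion())  # 7000 means TensorRT 7.0.0 is still the one being loaded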

To solve the above problem, I am following the steps here:

But unfortunately I have reached a complete dead end, as there is no CUDA 10.2 available for my NVIDIA driver version 450.51.05.

Can you suggest how I can move forward from here?

Hi @shubhaamm,
I suggest you raise this query in the DeepStream forum, as that team will be able to help you better in this case.
Thanks!

Hi,
I also ran into a similar problem. How did you solve it in the end?
Thanks