Environment
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/deepstream:5.0-20.07-triton
TensorRT Version:
GPU Type: Tesla T4
Nvidia Driver Version: 450.51.05
CUDA Version: 10.2
CUDNN Version:
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Hi, I’m trying to integrate a custom-trained TLT model into a Python DeepStream 5.0 application.
The model was trained and exported to .etlt using the completely unmodified TLT Stream Analytics NGC container:
nvcr.io/nvidia/tlt-streamanalytics:v2.0_py3
Deployment is being done on a completely unmodified DeepStream 5.0 NGC container:
nvcr.io/nvidia/deepstream:5.0-20.07-triton
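The model is loaded through nvinfer with a config file following the standard TLT layout. The sketch below shows the general shape of that config; the file paths, model key, input dimensions, and blob names are placeholders, not my exact values:

```ini
# Sketch of the nvinfer [property] section for an encrypted TLT (.etlt) model.
# All values here are placeholders for illustration, not my actual settings.
[property]
gpu-id=0
net-scale-factor=0.0039215686274509803
tlt-encoded-model=model.etlt
tlt-model-key=<ngc-model-key>
labelfile-path=labels.txt
uff-input-dims=3;544;960;0
uff-input-blob-name=input_1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
network-mode=0
num-detected-classes=3
```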
However, when the application tries to build the engine from the .etlt model, I get the following error:
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: UffParser: Validator error: FirstDimTile_4: Unsupported operation _BatchTilePlugin_TRT
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:14.934727996 1427 0x335ce10 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]:
Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
Fatal Python error: Segmentation fault
Any suggestions on how to resolve this would be helpful.
Thanks.