ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: UffParser: Validator error: FirstDimTile_4:

I'm using the Docker container nvcr.io/nvidia/deepstream:5.1-21.02-base.

Here is my config file; I have placed the model and label files at the paths shown below.

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=0

labelfile-path=/opt/nvidia/deepstream/deepstream-5.1/src/Deepstream-PoC/models/Primary_Bottle_SSD/ssd_labels.txt
model-engine-file=/opt/nvidia/deepstream/deepstream-5.1/src/Deepstream-PoC/models/Primary_Bottle_SSD/ssd_resnet18_retrained_epoch_040_bo_99_bl_94_rej_84.etlt_b1_gpu0_fp32.engine
tlt-encoded-model=/opt/nvidia/deepstream/deepstream-5.1/src/Deepstream-PoC/models/Primary_Bottle_SSD/ssd_resnet18_retrained_epoch_040_bo_99_bl_94_rej_84.etlt

tlt-model-key=nvidia_tlt
infer-dims=3;300;300
uff-input-order=0
maintain-aspect-ratio=1
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=4
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
output-blob-names=NMS
parse-bbox-func-name=NvDsInferParseCustomNMSTLT
custom-lib-path=/home/ubuntu/PoC/customLib/libnvds_infercustomparser_tlt.so
classifier-async-mode=0

[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0


[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=3
codec=1
sync=1
source-id=0
gpu-id=0
nvbuf-memory-type=0
display-text=1

Below is the error:

0:00:00.227926915 11029 0x55d94f955200 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: UffParser: Validator error: FirstDimTile_4: Unsupported operation _BatchTilePlugin_TRT
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:00.805065781 11029 0x55d94f955200 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 1]: build engine file failed
Segmentation fault (core dumped)
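A quick way to check whether this is the missing-plugin problem is to look for the BatchTile plugin inside the installed TensorRT plugin library. A minimal sketch, assuming the default library path of the x86 dGPU DeepStream 5.1 container (adjust the path for your system):

```shell
# Quick check: does the installed TensorRT plugin library contain the
# BatchTile plugin that the UFF parser is asking for?
# Path is an assumption for the x86 dGPU container; adjust as needed.
PLUGIN_LIB=/usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so
if grep -qa "BatchTilePlugin_TRT" "$PLUGIN_LIB" 2>/dev/null; then
  echo "BatchTilePlugin_TRT present in $PLUGIN_LIB"
else
  echo "BatchTilePlugin_TRT not found; rebuild libnvinfer_plugin from TensorRT OSS"
fi
```

If the plugin is missing, the stock libnvinfer_plugin.so needs to be replaced with one built from TensorRT OSS.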

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (For bugs: include which sample app is used, the configuration file contents, the command line used, and other details needed to reproduce.)
• Requirement details (For new requirements: include the module name, i.e. which plugin or which sample application, and the function description.)


Hi h9945394143,

Is this still an issue that needs support? Is there any result you can share?

I'm trying to generate a model engine file with the “tlt-encoded-model” below.


gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=0
labelfile-path=../../model/labels.txt
#model-engine-file=../../../../../../../../home/ubuntu/PoC/model/Primary_Bottle_SSD/ssd_resnet18_retrained_epoch_040_bo_99_bl_94_rej_84.etlt_b1_gpu0_fp32.engine
tlt-encoded-model=../../model/yolov4_resnet18_epoch_050.etlt
#int8-calib-file=../../../../../../../../home/ubuntu/PoC/models/ssd/cal.bin
tlt-model-key=nvidia_tlt
infer-dims=3;300;300
uff-input-order=0
maintain-aspect-ratio=1
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=2
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
output-blob-names=NMS
parse-bbox-func-name=NvDsInferParseCustomNMSTLT
custom-lib-path=../../customLib/libnvds_infercustomparser_tlt.so
classifier-async-mode=0

[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
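One thing worth double-checking in this config: tlt-encoded-model now points at a YOLOv4 .etlt, but the output blob name and bbox parser are still the SSD ones. As a hedged sketch (layer and function names taken from NVIDIA's deepstream_tao_apps sample configs; verify the output layer names and infer-dims against your own exported model), the YOLOv4 equivalents would look like:

```ini
# Hypothetical YOLOv4 settings per the deepstream_tao_apps samples;
# verify output layer names and input dims against your exported .etlt.
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
```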

It seems the “Unsupported operation _BatchTilePlugin_TRT” error comes from the error log below. Can you try the latest TRT OSS plugins from: deepstream_tao_apps/TRT-OSS/x86 at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub

ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: UffParser: Validator error: FirstDimTile_4: Unsupported operation _BatchTilePlugin_TRT
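For reference, rebuilding the plugin library from TensorRT OSS roughly follows the steps in the linked README. This is a sketch only: the branch, GPU_ARCHS value, cmake flags, and .so version suffix are assumptions (DeepStream 5.1 ships TensorRT 7.2.x; GPU_ARCHS must match your GPU's compute capability, e.g. 75 for T4) — check the current README for the exact commands for your setup.

```shell
# Sketch of rebuilding nvinfer_plugin from TensorRT OSS
# (per deepstream_tao_apps/TRT-OSS/x86). Versions and flags are assumptions.
TRT_BRANCH=release/7.2   # assumed; match your installed TensorRT version
GPU_ARCH=75              # assumed; use your GPU's compute capability
git clone -b "$TRT_BRANCH" https://github.com/NVIDIA/TensorRT.git
cd TensorRT && git submodule update --init --recursive
mkdir -p build && cd build
cmake .. -DGPU_ARCHS="$GPU_ARCH" -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu \
         -DCMAKE_BUILD_TYPE=Release
make nvinfer_plugin -j"$(nproc)"
# Back up the stock library, then drop in the rebuilt one. The exact .so
# version suffix and build output directory (out/ in some releases) depend
# on your TensorRT release.
sudo cp /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.2.* /tmp/
sudo cp out/libnvinfer_plugin.so.7.2.* /usr/lib/x86_64-linux-gnu/
sudo ldconfig
```

After replacing the library, delete any previously generated .engine file so nvinfer rebuilds it against the new plugins.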