[ERROR] UffParser: Validator error: FirstDimTile_4: Unsupported operation _BatchTilePlugin_TRT
[ERROR] Failed to parse uff model
[ERROR] Network must have at least one output
[ERROR] Unable to create engine
Segmentation fault (core dumped)
This error means that BatchTilePlugin_TRT is not included in the local libnvinfer_plugin.so that tlt-converter is linked against, so tlt-converter fails to parse the plugin node.
Check the TensorRT version you used for training and the one you are converting with, and make sure they are the same version.
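A quick way to sanity-check this is to look for the plugin name inside the libnvinfer_plugin.so your tlt-converter actually loads. Below is a minimal sketch of such a check; the library path shown is only an example and will differ per system, and `library_contains` is a helper name invented here, not part of any NVIDIA tooling:

```python
def library_contains(path, needle):
    """Return True if the byte string `needle` appears anywhere in the
    file at `path`. Shared libraries keep registered plugin names as
    plain strings, so a missing match strongly suggests the plugin was
    not built into this library."""
    with open(path, "rb") as f:
        return needle.encode() in f.read()

# Example usage (path is hypothetical; adjust for your install, e.g. the
# library reported by `ldconfig -p | grep libnvinfer_plugin`):
# library_contains("/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7",
#                  "BatchTilePlugin_TRT")
```

If the string is absent, rebuilding libnvinfer_plugin.so from the TensorRT OSS repository (matching your installed TensorRT version) and replacing the stock library is the usual fix.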
I’m facing the exact same error using the .etlt generated by the example SSD in TLT (tlt-streamanalytics:v2.0_dp_py2)
Passing the .etlt to tlt-converter on the Nano (downloaded from https://developer.nvidia.com/tlt-converter), or passing it directly to DeepStream (and having it do the conversion in the background), raises the exact same error.
This error happens to me when using the deepstream 5.1 devel docker image with the deepstream_tlt_apps repo and a yolov4 model trained with tlt.
This shouldn’t be the case! The docker image should already ship with the TensorRT OSS plugins needed to convert and run TLT models!