Validator error: FirstDimTile_4: Unsupported operation _BatchTilePlugin_TRT

I’ve created my model using TLT and wanted to create the engine on my Jetson Nano.

Then I typed this command:

keeper@keeper-desktop:~/Desktop/Yoskev$ ./tlt-converter -k bjdtNHBlYXIwZ3Z2YW1scDg2ZHZzN3FkMXY6MTVhNDg1ZTYtNDUyNC00YTUwLTg0NWUtOTRhYWIzMDAzxxxx -o NMS -d 3,480,640 -e /home/keeper/Desktop/Yoskev/SSD/ssd_resnet18_epoch_180.etlt

and it shows this:

[ERROR] UffParser: Validator error: FirstDimTile_4: Unsupported operation _BatchTilePlugin_TRT
[ERROR] Failed to parse uff model
[ERROR] Network must have at least one output
[ERROR] Unable to create engine
Segmentation fault (core dumped)

Could you help me with this?


I am facing the same error while converting an SSD model trained in TLT on my Jetson Nano.

The error means that BatchTilePlugin_TRT was not included in the local libnvinfer_plugin.so that tlt-converter links against, so tlt-converter failed to parse the plugin node.

https://github.com/NVIDIA/TensorRT/tree/master/plugin/batchTilePlugin
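A quick way to confirm this on the device is to look for the plugin name inside the library itself (the path below is the usual JetPack install location and is an assumption; adjust it if your libnvinfer_plugin.so lives elsewhere):

# Does the local plugin library register BatchTilePlugin_TRT?
strings /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so | grep -i batchtile
# No output means the plugin is absent, so the UFF parser rejects the node.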

You need to check the TRT versions used for training and for converting, and make sure they are the same.
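For example, both of these report the installed TensorRT version (standard commands; run the first on the Jetson, the second wherever the Python bindings are available, e.g. inside the TLT container):

# On the Jetson, TensorRT ships as Debian packages:
dpkg -l | grep nvinfer
# Where the Python bindings are installed (e.g. the TLT container):
python -c "import tensorrt; print(tensorrt.__version__)"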

Hi, I met the same error. Did you solve this problem?

I’m facing the exact same error using the .etlt generated by the SSD example in TLT (tlt-streamanalytics:v2.0_dp_py2).

Passing the .etlt to tlt-converter on the Nano (downloaded from https://developer.nvidia.com/tlt-converter), or passing it directly to DeepStream (and having it do the conversion in the background), raises the exact same error:

[ERROR] UffParser: Validator error: FirstDimTile_4: Unsupported operation _BatchTilePlugin_TRT

Oddly enough, TRT on the Nano (TRT 7.1.0.16) is newer than the one in TLT (TRT 7.0.0.11).

Hi all:
Try updating TensorRT OSS:
https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#tensorrt_oss
libnvinfer_plugin.so needs to be rebuilt so that it includes the TRT plugin.
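For reference, here is a minimal sketch of the rebuild that guide describes, run on the Nano itself. The branch name, the GPU_ARCHS value (53 is the Nano's compute capability), and the .so version suffix below are assumptions; match them to your JetPack/TensorRT release as per the guide, which also covers installing a recent CMake that the OSS build needs:

# Clone TensorRT OSS at the branch matching your installed TRT (assumption: 7.1).
git clone -b release/7.1 https://github.com/NVIDIA/TensorRT.git
cd TensorRT
git submodule update --init --recursive
mkdir -p build && cd build
cmake .. -DGPU_ARCHS=53 \
         -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu \
         -DCMAKE_C_COMPILER=/usr/bin/gcc
make nvinfer_plugin -j$(nproc)
# Back up the stock library, then drop in the rebuilt one. The exact
# .so version suffix and output location may vary -- check the build output.
sudo cp /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0 \
        /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0.bak
sudo cp libnvinfer_plugin.so.7.1.0 /usr/lib/aarch64-linux-gnu/
sudo ldconfig

After that, rerun tlt-converter; the _BatchTilePlugin_TRT node should then be resolved.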