Validator error: FirstDimTile_4: Unsupported operation _BatchTilePlugin_TRT

I trained my model with TLT and want to build the engine on my Jetson Nano.

I ran this command:

keeper@keeper-desktop:~/Desktop/Yoskev$ ./tlt-converter -k bjdtNHBlYXIwZ3Z2YW1scDg2ZHZzN3FkMXY6MTVhNDg1ZTYtNDUyNC00YTUwLTg0NWUtOTRhYWIzMDAzxxxx -o NMS -d 3,480,640 -e /home/keeper/Desktop/Yoskev/SSD/ssd_resnet18_epoch_180.etlt

and it shows this:

[ERROR] UffParser: Validator error: FirstDimTile_4: Unsupported operation _BatchTilePlugin_TRT
[ERROR] Failed to parse uff model
[ERROR] Network must have at least one output
[ERROR] Unable to create engine
Segmentation fault (core dumped)

Could you help me with this?

I am facing the same error while converting an SSD model trained in TLT on a Jetson Nano.

The error means that BatchTilePlugin_TRT is not included in the local libnvinfer_plugin.so linked against tlt-converter, so tlt-converter fails to parse the plugin node.

You need to check the TRT versions used for training and converting, and make sure they are the same.
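A quick way to check (a rough sketch; the library path and .so version suffix below assume a stock JetPack install and may differ on your setup):

# On the Jetson: which TensorRT packages are installed?
dpkg -l | grep nvinfer
# Inside the TLT training container: which TRT version trained the model?
python -c "import tensorrt; print(tensorrt.__version__)"
# Does the plugin library on the Jetson contain the plugin at all?
grep -a -c BatchTilePlugin_TRT /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7*

If the grep finds no match, the plugin library needs to be rebuilt (see the TensorRT OSS steps further down the thread).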

Hi, I ran into the same error. Did you solve this problem?

I’m facing the exact same error using the .etlt generated by the SSD example in TLT (tlt-streamanalytics:v2.0_dp_py2).

Passing the .etlt to tlt-converter on the Nano (downloaded from https://developer.nvidia.com/tlt-converter), or passing it directly to DeepStream (and having it do the conversion in the background), raises the exact same error:

[ERROR] UffParser: Validator error: FirstDimTile_4: Unsupported operation _BatchTilePlugin_TRT

Oddly enough, TRT on the Nano (TRT 7.1.0.16) is newer than the one in TLT (TRT 7.0.0.11).

Hi all,
try updating TensorRT OSS:
https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#tensorrt_oss
libnvinfer_plugin.so needs to be updated to include the TRT plugins.
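In rough outline that means rebuilding libnvinfer_plugin.so from the TensorRT OSS sources and swapping it in. A minimal sketch for a Nano, assuming JetPack’s TRT 7.1 and the default aarch64 library path (the branch name and the .so version suffixes are assumptions; match them to the TRT version you actually have installed):

# TensorRT OSS needs CMake >= 3.13; the stock Nano cmake may be too old
git clone -b release/7.1 https://github.com/NVIDIA/TensorRT.git
cd TensorRT && git submodule update --init --recursive
mkdir -p build && cd build
cmake .. -DGPU_ARCHS=53 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=$(pwd)/out
make nvinfer_plugin -j$(nproc)
# back up the stock library, then drop in the rebuilt one
sudo cp /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3 ~/libnvinfer_plugin.so.7.1.3.bak
sudo cp out/libnvinfer_plugin.so.7.1.3 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3
sudo ldconfig

GPU_ARCHS=53 is the SM version of the Nano’s Maxwell GPU; other Jetson boards need a different value.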

Hello, I used TRT 7.0.0 for training and TRT 7.1.3 when deploying on the Jetson Nano. May I ask whether TRT is compatible across versions, from a higher one to a lower one?

This error happens to me when using the DeepStream 5.1 devel Docker image with the deepstream_tlt_apps repo and a YOLOv4 model trained with TLT.
This shouldn’t be the case! The Docker image should already ship with the TensorRT OSS build needed to convert and run TLT models!
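One quick way to see whether the image’s plugin library actually contains the op named in the error (a sketch; the library path and .so version suffix are assumptions for the x86_64 devel image):

# run inside the DS 5.1 devel container:
grep -a -c BatchTilePlugin_TRT /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7* && echo "plugin present" || echo "plugin missing - rebuild TensorRT OSS"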

Hi,
This looks like a Jetson issue. We recommend you raise it on the respective platform via the link below.

Thanks!

Hi @yoshuakevin and all,

Just answering here for reference,

The answer to this has two parts:

  1. Install the TensorRT OSS build

  2. Write a custom parser

If you want something like a rough guide, please refer to these two questions:

TAO to Jetson Deepstream conversion
ONNX to Jetson Deepstream conversion
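For part 2, the TLT sample apps already ship a custom bounding-box parser that DeepStream can load, so you can usually build that rather than write one from scratch. A minimal sketch, assuming the deepstream_tlt_apps layout at the time of writing (directory, library, and function names may differ between releases):

git clone https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps.git
cd deepstream_tlt_apps/post_processor
export CUDA_VER=10.2   # match your JetPack's CUDA version
make                   # builds libnvds_infercustomparser_tlt.so
# then point the model's nvinfer config at it:
#   parse-bbox-func-name=NvDsInferParseCustomNMSTLT
#   custom-lib-path=<path-to>/libnvds_infercustomparser_tlt.so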

I had the same issue and that is how I fixed it!

Cheers,
Ganindu.