Error when deserializing an engine generated by TLT

I am using a Jetson Xavier with JetPack 4.2, CUDA 10.0, and TensorRT 5.1.6.1.

Following the TLT introduction, I trained an SSD ResNet-18 model based on the pretrained model from NGC, but I ran into some problems.

1. First problem: tlt-converter failed

nvidia@nvidia-desktop:~/samba/temp/tlt-converter-trt5.1$ ./tlt-converter -k $KEY -d 3,384,1248 -o NMS -e ssd-trt.engine ssd_resnet18_epoch_180.etlt
[ERROR] UffParser: Validator error: FirstDimTile_4: Unsupported operation _BatchTilePlugin_TRT
[ERROR] Failed to parse uff model
[ERROR] Network must have at least one output
[ERROR] Unable to create engine
Segmentation fault (core dumped)

This says that _BatchTilePlugin_TRT is not defined.

Then I compiled the TensorRT OSS plugins ( https://github.com/NVIDIA/TensorRT ) from the 5.1 branch. The build succeeded, and I replaced the libnvinfer_plugin libraries in /usr/lib/aarch64-linux-gnu with the newly built ones.
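To double-check that the replaced library actually registers the plugin, a small test along these lines can dump every creator in the global plugin registry (this is my own sketch, not something from the TLT docs; the build command is an assumption for a typical Jetson setup):

```cpp
// Sketch: list all plugin creators registered by libnvinfer_plugin.
// Build (assumed typical Jetson setup):
//   g++ list_plugins.cpp -o list_plugins -lnvinfer -lnvinfer_plugin
#include <cstdio>
#include <NvInfer.h>
#include <NvInferPlugin.h>

// Minimal logger required by the TensorRT API.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::printf("%s\n", msg);
    }
} gLogger;

int main()
{
    // Registers the built-in plugins (BatchTilePlugin_TRT, NMS_TRT, ...)
    // from libnvinfer_plugin with the global registry.
    initLibNvInferPlugins(&gLogger, "");

    int n = 0;
    nvinfer1::IPluginCreator* const* creators =
        getPluginRegistry()->getPluginCreatorList(&n);
    for (int i = 0; i < n; ++i)
        std::printf("%s (version %s)\n", creators[i]->getPluginName(),
                    creators[i]->getPluginVersion());
    return 0;
}
```

If BatchTilePlugin_TRT shows up in this list, the library replacement itself worked.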

After that, tlt-converter ran successfully and produced the engine file.

nvidia@nvidia-desktop:~/samba/temp/tlt-converter-trt5.1$ ./tlt-converter -k $KEY -d 3,384,1248 -o NMS -e ssd-trt.engine ssd_resnet18_epoch_180.etlt
[INFO] UFFParser: parsing Input
[INFO] UFFParser: Applying order forwarding to: Input
[INFO] UFFParser: parsing conv1/kernel
[INFO] UFFParser: Applying order forwarding to: conv1/kernel
[INFO] UFFParser: parsing conv1/convolution
[INFO] UFFParser: Applying order forwarding to: conv1/convolution
[INFO] UFFParser: parsing conv1/bias
[INFO] UFFParser: Applying order forwarding to: conv1/bias
[INFO] UFFParser: parsing conv1/BiasAdd
[INFO] UFFParser: Applying order forwarding to: conv1/BiasAdd
[INFO] UFFParser: parsing bn_conv1/moving_variance
[INFO] UFFParser: Applying order forwarding to: bn_conv1/moving_variance
[INFO] UFFParser: parsing bn_conv1/Reshape_1/shape
[INFO] UFFParser: Applying order forwarding to: bn_conv1/Reshape_1/shape
[INFO] UFFParser: parsing bn_conv1/Reshape_1
[INFO] UFFParser: Applying order forwarding to: bn_conv1/Reshape_1
[INFO] UFFParser: parsing bn_conv1/batchnorm/add/y
[INFO] UFFParser: Applying order forwarding to: bn_conv1/batchnorm/add/y
[INFO] UFFParser: parsing bn_conv1/batchnorm/add
……
[INFO] Block size 46080
[INFO] Block size 15360
[INFO] Total Activation Memory: 1431005696
[INFO] Detected 1 input and 2 output network tensors.
[INFO] Data initialization and engine generation completed in 0.14853 seconds.

2. Second problem: deserializing the engine file fails

nvidia@nvidia-desktop:~/samba/temp/resnet10_SSD-TensorRT/build/bin$ ./resnet_ssd
loading network profile from cache…
createInference
getPluginCreator could not find plugin BatchTilePlugin_TRT version 1 namespace
Cannot deserialize plugin BatchTilePlugin_TRT
getPluginCreator could not find plugin BatchTilePlugin_TRT version 1 namespace
Cannot deserialize plugin BatchTilePlugin_TRT
getPluginCreator could not find plugin BatchTilePlugin_TRT version 1 namespace
Cannot deserialize plugin BatchTilePlugin_TRT
getPluginCreator could not find plugin BatchTilePlugin_TRT version 1 namespace
Cannot deserialize plugin BatchTilePlugin_TRT
getPluginCreator could not find plugin BatchTilePlugin_TRT version 1 namespace
Cannot deserialize plugin BatchTilePlugin_TRT
getPluginCreator could not find plugin BatchTilePlugin_TRT version 1 namespace
Cannot deserialize plugin BatchTilePlugin_TRT
getPluginCreator could not find plugin NMS_TRT version 1 namespace
Cannot deserialize plugin NMS_TRT
Segmentation fault (core dumped)

It seems that BatchTilePlugin_TRT cannot be found at runtime, even though I compiled and installed the libnvinfer_plugin libraries that should contain it. Is there anything wrong with this procedure, or is there another step I should take?
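For context, the engine-loading step in my application looks roughly like the sketch below (illustrative names, not the actual resnet_ssd sources). My suspicion is that the missing step is the initLibNvInferPlugins() call before deserializeCudaEngine(), since that call is what populates the registry that getPluginCreator searches:

```cpp
// Simplified sketch of the engine-loading step (illustrative names, not
// the actual resnet_ssd sources). The question: is the
// initLibNvInferPlugins() call below the step I am missing?
#include <fstream>
#include <vector>
#include <NvInfer.h>
#include <NvInferPlugin.h>

nvinfer1::ICudaEngine* loadEngine(const char* path, nvinfer1::ILogger& logger)
{
    // Without this call the plugin registry is empty, and deserialization
    // fails with "getPluginCreator could not find plugin ..." for every
    // plugin layer baked into the engine.
    initLibNvInferPlugins(&logger, "");

    // Read the serialized engine from disk.
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    if (!file)
        return nullptr;
    std::vector<char> blob(static_cast<size_t>(file.tellg()));
    file.seekg(0);
    file.read(blob.data(), blob.size());

    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
    runtime->destroy();
    return engine;
}
```

I also wonder whether resnet_ssd has to be linked against libnvinfer_plugin at all; if it is not, the replaced library would never be loaded into the process in the first place.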
