Benchmarking TLT YOLOv3

Hi Team,
I have been following this GitHub repo:
(GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream)
I got the TLT models from:
[https://nvidia.box.com/shared/static/i1cer4s3ox4v8svbfkuj5js8yqm3yazo.zip]

Below are the details:
Environment:
Device: Tesla T4
CUDA version: 10.2
TensorRT version: 7.0
Docker image: docker pull nvcr.io/nvidia/deepstream:5.0-20.07-triton

Command:
./ds-tlt -c /opt/nvidia/deepstream/deepstream-5.0/samples/deepstream_tlt_apps/configs/yolov3_tlt/pgie_yolov3_tlt_config.txt -i /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 -b 2

Output Error:
WARNING: Overriding infer-config batch-size (1) with number of sources (2)
Now playing: /opt/nvidia/deepstream/deepstream-5.0/samples/deepstream_tlt_apps/configs/yolov3_tlt/pgie_yolov3_tlt_config.txt
0:00:00.737391560 396 0x563947a11610 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: UffParser: Validator error: FirstDimTile_2: Unsupported operation _BatchTilePlugin_TRT
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:00.944565439 396 0x563947a11610 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
Segmentation fault (core dumped)
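For reference, the `Unsupported operation _BatchTilePlugin_TRT` error usually means the stock `libnvinfer_plugin.so` shipped in the container does not include the plugins these TAO/TLT models need, so the TensorRT OSS plugin library has to be rebuilt and swapped in. Since the device here is a Tesla T4 (x86), the x86 variant of the repo's TRT-OSS instructions applies rather than the Jetson one. A rough sketch of those steps, to be run inside the container on the target machine (the branch name, `GPU_ARCHS=75` for T4, and the `.so` version suffixes are assumptions; follow the repo's TRT-OSS README for the exact values matching your TensorRT install):

```shell
# Sketch of the TRT-OSS plugin rebuild for TensorRT 7.0 on x86 (T4 = SM 7.5).
# Branch and .so suffixes below are assumptions; check the repo README.
git clone -b release/7.0 https://github.com/NVIDIA/TensorRT.git
cd TensorRT
git submodule update --init --recursive
mkdir -p build && cd build
cmake .. -DGPU_ARCHS=75 \
         -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu/ \
         -DCMAKE_C_COMPILER=/usr/bin/gcc
make nvinfer_plugin -j"$(nproc)"

# Back up the stock plugin library, then replace it with the freshly built one
# (the exact version suffix of the built .so may differ; check the build output).
cp /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0 "${HOME}/libnvinfer_plugin.so.7.0.0.bak"
cp libnvinfer_plugin.so.7.0.0* /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0
ldconfig
```

After the swap, delete any previously generated `.engine` files so nvinfer rebuilds them against the new plugin library.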

I have also followed:
(deepstream_tao_apps/TRT-OSS/Jetson at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub)

But the error still persists:
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1523 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-imagedata-multistream/yolov3_resnet18.etlt_b1_gpu0_fp16.engine open error
0:00:11.954855314 2771 0x22e4ac0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-imagedata-multistream/yolov3_resnet18.etlt_b1_gpu0_fp16.engine failed
0:00:11.954914143 2771 0x22e4ac0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-imagedata-multistream/yolov3_resnet18.etlt_b1_gpu0_fp16.engine failed, try rebuild
0:00:11.954933439 2771 0x22e4ac0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: UffParser: Could not read buffer.
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:12.046943291 2771 0x22e4ac0 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 1]: build engine file failed
Segmentation fault (core dumped)
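For what it's worth, this second log first fails to open a serialized engine under `deepstream-5.1/.../deepstream-imagedata-multistream/` (while the run above used deepstream-5.0 paths) and then falls back to parsing the model file, so the `model-engine-file` in the config does not point at an existing engine. The relevant nvinfer properties look like this (paths and the model key below are placeholders for illustration, not values from the original config):

```
[property]
# Encoded TAO/TLT model and the key it was exported with (placeholder key)
tlt-encoded-model=/path/to/yolov3_resnet18.etlt
tlt-model-key=tlt_encode
# Serialized engine; nvinfer rebuilds it from the .etlt if this file is absent
model-engine-file=/path/to/yolov3_resnet18.etlt_b1_gpu0_fp16.engine
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
```

If the engine file is absent, nvinfer only logs the deserialize warning and rebuilds; the hard failure here is the subsequent UFF parse, which points back at the plugin issue above.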

Please suggest how I can benchmark the YOLOv3 model.
Note: Screenshot attached.
Regards,
Madhurima
Reliance Jio

Please refer to TLT-deepstream sample app problems: I found that FRCNN, SSD, DSSD, RetinaNet and Detectnet_v2 can run successfully, but Yolov3 can't - #21 by hyperlight.
More search results can be found at Search results for 'Unsupported operation _BatchTilePlugin_TRT parseModel: Failed to parse UFF model #intelligent-video-analytics:transfer-learning-toolkit order:latest_topic' - NVIDIA Developer Forums
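On the benchmarking question: once the engine builds, one way to get FPS numbers is to run the pipeline through deepstream-app with `enable-perf-measurement=1` set in the `[application]` group of its config, then average the `**PERF:` lines it prints. A minimal sketch of that post-processing (the `perf.log` contents below are illustrative, not from a real run):

```shell
# Sample perf lines in the format deepstream-app prints (illustrative values).
cat > perf.log <<'EOF'
**PERF: 61.42 (60.93)
**PERF: 60.87 (60.95)
**PERF: 61.10 (60.96)
EOF

# Average the instantaneous FPS column ($2) across all **PERF: lines.
awk '/\*\*PERF:/ { sum += $2; n++ } END { printf "avg FPS: %.2f\n", sum / n }' perf.log
# prints: avg FPS: 61.13
```

For a DeepStream-independent number, `trtexec` on the generated `.engine` file also reports raw inference throughput.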