How do I import a trained TLT model into Triton?

I trained a ResNet50 image classification model on my own dataset with NVIDIA TLT. The toolkit produced resnet50.etlt and resnet50.trt. I'm now trying to deploy the model on NVIDIA Triton Inference Server; however, the server expects a model.plan file plus a config file.
How can I deploy my model to the Triton server smoothly? It seems I need some kind of conversion step to turn my resnet50.trt into the model.plan the server expects.

I’m using:
TLT docker image: nvcr.io/nvidia/tlt-streamanalytics:v3.0-dp-py3 (TRT:7.2.1)
Triton docker image: nvcr.io/nvidia/tritonserver:20.12-py3 (TRT:7.2.2)

System:
Ubuntu 16.04
NVIDIA Tesla V100

A TensorRT model definition is called a Plan. A TensorRT Plan is a single file that, by default, must be named model.plan.
A simple approach is to run tlt-converter inside the Triton server container to generate the TensorRT engine from your resnet50.etlt and rename it to model.plan. Building the engine in the Triton container matters because a TensorRT engine is tied to the TensorRT version and GPU it was built on (note your TLT and Triton images ship TRT 7.2.1 and 7.2.2 respectively). A sketch of this is shown below.
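For example, here is a minimal sketch of that conversion, run inside the Triton server container so the engine matches the server's TensorRT version and GPU. The encryption key placeholder, the 3x224x224 input dimensions, and the predictions/Softmax output node name are assumptions (typical for a TLT ResNet50 classification export); check them against your own model.

```bash
# Run inside the Triton container (nvcr.io/nvidia/tritonserver:20.12-py3)
# after copying the tlt-converter binary and resnet50.etlt into it.
tlt-converter resnet50.etlt \
  -k <your_tlt_encryption_key> \
  -d 3,224,224 \
  -o predictions/Softmax \
  -t fp16 \
  -m 16 \
  -e /models/resnet50/1/model.plan
```

Writing the engine to that -e path places it directly inside a Triton model repository under the name model.plan, so no separate rename step is needed.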

Please refer to:
- server/model_repository.md at r20.12 · triton-inference-server/server · GitHub
- Using TLT models with Triton Inference Server - #6 by Morganh
- https://developer.nvidia.com/blog/nvidia-serves-deep-learning-inference/
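For completeness, here is a sketch of the model repository layout and a minimal config.pbtxt for the converted engine. The tensor names, dimensions, and class count are assumptions based on a typical TLT ResNet50 classification export; adjust them to match your model.

```
/models/
└── resnet50/
    ├── config.pbtxt
    └── 1/
        └── model.plan
```

```
# config.pbtxt (assumed tensor names and shapes; verify against your model)
name: "resnet50"
platform: "tensorrt_plan"
max_batch_size: 16
input [
  {
    name: "input_1"
    data_type: TYPE_FP32
    format: FORMAT_NCHW
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "predictions/Softmax"
    data_type: TYPE_FP32
    dims: [ 1000 ]   # replace 1000 with the number of classes in your dataset
  }
]
```

Triton can then be started with `tritonserver --model-repository=/models` and should load the resnet50 model automatically.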