Hi,
An ONNX model is enough.
You can convert it to a TensorRT engine with the following command:
$ /usr/src/tensorrt/bin/trtexec --onnx=[your/model] --saveEngine=model.trt
In addition, if half-precision is acceptable, you can convert with the --fp16 flag for better performance:
$ /usr/src/tensorrt/bin/trtexec --onnx=[your/model] --saveEngine=model.trt --fp16
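After conversion, you can check that the saved engine loads and measure its inference throughput with trtexec as well (the engine path below assumes the filename used above):

```shell
# Load the serialized engine and run trtexec's built-in benchmark.
# This reports latency and throughput without needing any application code.
$ /usr/src/tensorrt/bin/trtexec --loadEngine=model.trt
```

Note that a serialized engine is specific to the GPU and TensorRT version it was built with, so run the conversion on the same device you will deploy to.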
Thanks.