I have developed a semantic segmentation model using the UNet architecture on my computer running Ubuntu 18.04.
My goal is to deploy this model to Triton Inference Server on Jetson devices. Triton Inference Server accepts TensorRT files with a .plan extension, but when I export the model I developed in TAO, I get a TensorRT file with a .engine extension.
How can I convert this .engine file to a .plan file so that I can deploy the model to Triton Inference Server on Jetson devices?
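For reference, here is the Triton model repository layout I am trying to produce. The model name `segmentation_unet` and the file `trt.engine` are placeholders for my actual names. My working assumption is that a serialized TensorRT engine is the same binary regardless of extension, so a rename to `model.plan` might be enough, but I am not sure this is valid:

```shell
# Placeholder names: "segmentation_unet" and "trt.engine" stand in
# for my actual model name and the engine exported from TAO.
mkdir -p model_repository/segmentation_unet/1

# Stand-in for the .engine file exported from TAO.
touch trt.engine

# Assumption: a TensorRT engine file is identical whether it is
# named .engine or .plan, so copying/renaming it may be sufficient.
cp trt.engine model_repository/segmentation_unet/1/model.plan
```

Is this repository layout correct for a TensorRT model on Jetson, and is renaming the file really all that is required, or is an actual conversion step needed?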