How to deploy a UNet model developed in the TAO Toolkit to a Jetson device?

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc) : GTX 1050
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc) : Unet
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here): TAO Toolkit toolkit_version: 3.21.08
• Training spec file (if you have one, please share it here): spec.txt (1.2 KB)
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)

Hello,

I developed a semantic segmentation model with the UNet network on my computer, which runs the Ubuntu 18.04 operating system.

My goal is to deploy this model to Triton Inference Server on Jetson devices. Triton Inference Server accepts TensorRT engine files with the .plan extension, but when I export the model I developed in TAO, I get a TensorRT file with the .engine extension.

How can I convert this .engine file to a .plan file in order to deploy the model to Triton Inference Server on Jetson devices?

Thanks

You can simply rename xxx.engine to xxx.plan; both extensions refer to the same serialized TensorRT engine format, so no conversion is needed.
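Below is a minimal sketch of how that rename fits into a Triton model repository layout. The paths and the model name unet_tao are hypothetical placeholders; Triton's TensorRT backend expects the engine to be named model.plan inside a numbered version directory.

```python
import shutil
from pathlib import Path

# Hypothetical paths -- adjust to where TAO exported your engine
# and to the model name you use in your Triton model repository.
engine_file = Path("export/unet_model.engine")
version_dir = Path("model_repository/unet_tao/1")

version_dir.mkdir(parents=True, exist_ok=True)

# .engine and .plan contain the same serialized TensorRT engine bytes,
# so copying the file under the name Triton expects is all that is needed.
shutil.copy(engine_file, version_dir / "model.plan")
```

You will typically also place a config.pbtxt next to the version directory, unless you let Triton auto-complete the model configuration. One caveat: a TensorRT engine is specific to the GPU and TensorRT version it was built with, so an engine exported on a GTX 1050 will not load on a Jetson; the usual workflow is to generate the engine on the Jetson itself (for example with tao-converter) and then rename it as above.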
