[Question] Is it possible to take a .pt model and export it into onnx/trt format using TAO Toolkit?

I tried exporting according to this but failed with both torch.onnx.dynamo_export and torch.onnx.export. I wonder if it is possible to export through TAO instead.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

For a .pth model trained via the TAO Toolkit, it is certainly possible to export it to an ONNX file.
For example, https://github.com/NVIDIA/tao_pytorch_backend/blob/e5010af08121404dfb696152248467eee85ab3a7/nvidia_tao_pytorch/cv/dino/scripts/export.py,
https://github.com/NVIDIA/tao_pytorch_backend/blob/e5010af08121404dfb696152248467eee85ab3a7/nvidia_tao_pytorch/cv/metric_learning_recognition/scripts/export.py, etc.

The command line can be found in the corresponding user guide or notebook.
Refer to DINO - NVIDIA Docs.
Or https://github.com/NVIDIA/tao_tutorials/blob/95aca39c79cb9068593a6a9c3dcc7a509f4ad786/notebooks/tao_launcher_starter_kit/dino/dino.ipynb.
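As a rough sketch, the export spec file referenced by those notebooks might look like the following. The field names and values here are assumptions based on typical TAO export specs, not copied from the docs; verify the exact keys against the DINO export section of the user guide for your TAO version.

```yaml
# Hypothetical export spec for `dino export` (all keys and paths are
# illustrative assumptions; check your version's user guide).
export:
  checkpoint: /workspace/results/train/dino_model.pth   # .pth trained with TAO
  onnx_file: /workspace/results/export/dino_model.onnx  # ONNX output path
  input_width: 960
  input_height: 544
  opset_version: 17
  batch_size: -1      # -1 is commonly used for a dynamic batch dimension
```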

There are several TAO Docker images. See TAO Toolkit | NVIDIA NGC. One of them is the PyTorch backend.
You can log in to the docker and run the export command:
$ docker run --runtime=nvidia -it --rm nvcr.io/nvidia/tao/tao-toolkit:5.1.0-pyt /bin/bash
Then, inside the docker, run the command without the "tao" prefix. For example, for the DINO network:
$ dino export xxx
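Putting the steps above together, here is a hedged end-to-end sketch. The volume mount, spec file path, and results directory are placeholders, and the `-e`/`-r` flags follow the usual TAO CLI convention for the experiment spec and results directory; confirm them with `dino export --help` inside the container.

```shell
# Start the TAO PyTorch container (tag 5.1.0-pyt as in the reply above),
# mounting a local directory so the exported ONNX file persists on the host.
docker run --runtime=nvidia -it --rm \
    -v /local/experiments:/workspace \
    nvcr.io/nvidia/tao/tao-toolkit:5.1.0-pyt /bin/bash

# Inside the container: same subcommand as the TAO launcher, but without
# the leading "tao". Paths below are placeholders for illustration.
dino export -e /workspace/specs/export.yaml \
            -r /workspace/results/export
```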

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.