How to use ONNX models with the TAO Toolkit?

I’d like to use ONNX models with the TAO Toolkit.
I found the TAO BYOM Converter, but the documentation says that BYOM supports only Classification and UNet, and does not support other tasks such as Object Detection.
If I want to use an ONNX model with TAO, am I supposed to use the TAO BYOM Converter TF2?
I read the TAO Toolkit documentation and Jupyter Notebooks, but I could not work out how to do this, nor could I find any approach other than the TAO BYOM Converter.
I also could not find the Jupyter Notebook that the video mentions.
Would you mind telling me the specific way to use an ONNX model with TAO?

Ubuntu: 22.04
TAO Toolkit: 5.3

Yes, for 3rd-party ONNX files, currently only Classification and UNet are supported.

There is an example in GitHub - NVIDIA-AI-IOT/tao_byom_examples (Examples of converting different open-source deep learning models to a TAO-compatible format through the TAO BYOM package).
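Roughly, the workflow in that repository is to take a third-party ONNX model and run the tao_byom converter on it to produce a .tltb file, which the TAO classification/UNet notebooks can then load as a pretrained backbone. Below is a minimal sketch of driving the converter from Python; the flag names (-m/-r/-n/-k) are assumptions based on the examples README and may differ between versions, so please confirm them with `tao_byom --help`.

```python
# Minimal sketch: convert a third-party ONNX classification model into a
# TAO-compatible .tltb file by calling the tao_byom CLI.
# The flags below (-m/-r/-n/-k) are assumptions based on the
# tao_byom_examples README; verify them with `tao_byom --help`.
import subprocess

subprocess.run(
    [
        "tao_byom",
        "-m", "resnet18.onnx",     # input ONNX model (hypothetical path)
        "-r", "results/resnet18",  # output directory for the .tltb file
        "-n", "resnet18",          # model name to embed in the output
        "-k", "nvidia_tlt",        # key; must match the $KEY used in the notebook
    ],
    check=True,
)
```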

@Morganh
Thank you for your reply.

Yes, for 3rd-party ONNX files, currently only Classification and UNet are supported.

I see, so I cannot use ONNX models with the TAO Toolkit unless they are Classification or UNet models.

There is an example in GitHub - NVIDIA-AI-IOT/tao_byom_examples (Examples of converting different open-source deep learning models to a TAO-compatible format through the TAO BYOM package).

I had read that GitHub page, but apparently I did not fully understand it.
So, is all I have to do to move the .tltb file into $LOCAL_EXPERIMENT_DIR/pretrained_resnet18/ and then run the Jupyter Notebook, roughly as in the sketch below?
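This is what I have in mind; the paths and the .tltb file name are only my guesses based on the classification notebook variables, not something I have confirmed.

```python
# Rough sketch of what I am imagining: copy the .tltb produced by the BYOM
# converter into the directory the classification notebook expects for the
# pretrained model, then run the notebook as usual.
# LOCAL_EXPERIMENT_DIR and the .tltb file name are assumptions on my side.
import os
import shutil
from pathlib import Path

experiment_dir = Path(os.environ["LOCAL_EXPERIMENT_DIR"])
target_dir = experiment_dir / "pretrained_resnet18"
target_dir.mkdir(parents=True, exist_ok=True)

# resnet18.tltb is the file produced by the BYOM converter (hypothetical name)
shutil.copy("results/resnet18/resnet18.tltb", target_dir)
```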

Also, is that web page maintained?
I clicked several of the URL links, but most of them led to 404 pages.

Please refer to the latest 5.3.0 notebook.

Please refer to the latest 5.3.0 notebook.

Thank you so much.
