3 Methods for Speeding up AI Model Development with TAO Toolkit

Originally published at: https://developer.nvidia.com/blog/3-methods-speeding-up-ai-model-development-tao-toolkit-whitepaper/

How do you shorten your AI application’s time to market (TTM)? Here are 3 methods to eliminate framework complexity and cut your training time in half using TAO Toolkit.

How do I use the data produced by the "Process&Train_Helmet.ipynb" Jupyter notebook to make the TAO export function produce an OUTPUT folder that contains the info to build an engine on my Xavier NX?

Hi @adventuredaisy ,

You can take the exported .etlt model and then use tao-converter on the NX to convert this .etlt model to a TensorRT engine. (Reference link for the tao-converter matching the JetPack version you have on the NX: https://developer.nvidia.com/tao-toolkit-get-started )
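A minimal sketch of the two steps, assuming a DetectNet_v2 model. The paths, the `$KEY` encoding key, the input dimensions, and the output node names are illustrative placeholders, not values from the notebook; replace them with the ones from your own training run:

```shell
# On the training workstation: export the trained .tlt model to .etlt.
# Model type (detectnet_v2), paths, and $KEY are placeholders.
tao detectnet_v2 export \
    -m /workspace/experiment/weights/model.tlt \
    -k $KEY \
    -o /workspace/export/model.etlt

# On the Xavier NX: convert the .etlt model to a TensorRT engine using
# the tao-converter build that matches your JetPack version.
# -d: input dims (C,H,W) of your model; -o: output node names
# (the DetectNet_v2 ones are shown); -t: precision; -e: engine output path.
./tao-converter /path/to/model.etlt \
    -k $KEY \
    -d 3,384,1248 \
    -o output_bbox/BiasAdd,output_cov/Sigmoid \
    -t fp16 \
    -e /path/to/model.engine
```

The engine file produced by `-e` is what you then point your DeepStream or TensorRT application at on the NX; it must be built on the NX itself (or a device with the same GPU and TensorRT version), since TensorRT engines are not portable across platforms.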

Here is a video walkthrough of the Jupyter notebook
“Experiment 3: Add new classes of objects to an existing AI model”
from the NVIDIA tech blog
"3 Methods for Speeding up AI Model Development with TAO Toolkit".
Alongside the video there is also a link to my GitHub repo, which contains the Jupyter notebook
from this example with additional cells added so you can export the model to run on a Xavier NX.