Running Transfer Learning Toolkit on local machine

Hi Guys,

I would like to know if it is possible to run the Transfer Learning Toolkit (TLT) on a local machine, or can it only be run in the cloud?

Thanks.

TLT can run on a local machine via Docker. See more details in the release documentation. Thanks.
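
As a rough sketch of what that looks like in practice (assuming Docker and the NVIDIA Container Toolkit are already installed; the image tag and the host workspace path below are examples, so check NGC for the current TLT container tag), pulling and starting the container could be scripted like this:

```python
import subprocess

# Assumed TLT container tag -- check ngc.nvidia.com for the current one.
TLT_IMAGE = "nvcr.io/nvidia/tlt-streamanalytics:v2.0_py3"

# Pull the container from the NGC registry (requires a prior
# `docker login nvcr.io` with your NGC API key).
subprocess.run(["docker", "pull", TLT_IMAGE], check=True)

# Start an interactive session with GPU access; the host folder
# /home/user/tlt-experiments (hypothetical path) is mounted as the
# workspace inside the container.
subprocess.run([
    "docker", "run", "--gpus", "all", "-it", "--rm",
    "-v", "/home/user/tlt-experiments:/workspace/tlt-experiments",
    TLT_IMAGE, "/bin/bash",
], check=True)
```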

Can we run TLT on Jetson Nano with JetPack?

Hi rsiu,
See Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation; the machine used to run the Transfer Learning Toolkit must meet the hardware and software requirements listed there.

For deploying an etlt model or TRT engine in DeepStream on a Jetson Nano, please see https://devtalk.nvidia.com/default/topic/1065558/transfer-learning-toolkit/trt-engine-deployment/

Hi Morganh,

Thank you for your reply. I take it that we cannot run TLT on a Jetson Nano then. Is there any tutorial on running TLT on AWS or another cloud provider?

BR,
Ricardo

Hi rsiu,
See the TLT documentation, especially Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation. You can download the Docker container and run it on your Ubuntu PC.
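
As a follow-up to the sketch above, and assuming the image ships with Jupyter and the example notebooks (the TLT 2.0 container did; verify for your tag), the notebooks can be served to the host browser like this:

```python
import subprocess

# Same assumed image tag and hypothetical workspace path as above.
TLT_IMAGE = "nvcr.io/nvidia/tlt-streamanalytics:v2.0_py3"

# Run the container and serve Jupyter on port 8888 so the bundled example
# notebooks can be opened from the host at http://localhost:8888.
subprocess.run([
    "docker", "run", "--gpus", "all", "--rm",
    "-p", "8888:8888",
    "-v", "/home/user/tlt-experiments:/workspace/tlt-experiments",
    TLT_IMAGE,
    "jupyter", "notebook", "--ip", "0.0.0.0", "--port", "8888", "--allow-root",
], check=True)
```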

Hi Morganh,

I do not have a qualified local machine. I want to run this on the cloud (e.g., AWS). Any pointers?

BR,
Ricardo

You can try running it in the NGC cloud (ngc.nvidia.com).
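
Whether you run on a cloud VM or locally, the container still has to be pulled from the NGC registry, which needs an API key generated from your NGC account. As a minimal sketch (the NGC_API_KEY environment variable is just a hypothetical place to keep the key), the registry login looks roughly like this:

```python
import os
import subprocess

# Hypothetical environment variable holding the API key generated from your
# NGC account. The registry username for nvcr.io is the literal string
# "$oauthtoken", not your NGC user name.
api_key = os.environ["NGC_API_KEY"]

# Log Docker in to the NGC registry so the TLT image can be pulled.
subprocess.run(
    ["docker", "login", "--username", "$oauthtoken", "--password-stdin", "nvcr.io"],
    input=api_key.encode(),
    check=True,
)
```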

How do you do that? Any tutorial? This is my NGC account and what I see after login:

[Screenshot attachment: ngc.PNG (Google Drive)]

Below is a good starting point for NGC.