Transfer Learning Toolkit for Jetson Nano

Hi all,
I have some questions about this toolkit. I'm new to this work, so if I'm mistaken, please correct me.
1- Does TLT only run in a Docker container?
2- Is it possible to run a TLT-trained model in Python code without the extra DeepStream package?
3- What is the difference between the TLT workflow and training models with TensorFlow, converting them to TensorRT, and then running them on Jetson Nano for inference?
4- When a model is trained with TLT, do we also need the Docker container again for inference? I want to have simple Python code for running the TLT-trained model on Jetson Nano; is that possible?
5- Is it possible to use a custom model for training with TLT?
6- Can a TLT-trained model be run with both the DeepStream SDK and the TensorRT library? If so, TensorRT accepts UFF/ONNX/Caffe model parsers, but TLT models are in .etlt format. How does that work? Probably I should pass the .bin file, right?

7- Is it possible to run DeepStream 5 and TLT on a GTX 1080? I only saw T4 and Jetson mentioned.
8- To use the PeopleNet/FaceDetection/YOLOv3 models from TLT, do we need the closed-source DeepStream 5? Is it possible to run these models with the DeepStream Python apps sample code?

To enable faster and accurate AI training, NVIDIA just released highly accurate, purpose-built, pretrained models with the NVIDIA Transfer Learning Toolkit (TLT) 2.0. You can use these custom models as the starting point to train with a smaller dataset and reduce training time significantly. These purpose-built AI models can either be used as-is, if the classes of objects match your requirements and the accuracy on your dataset is adequate, or easily adapted to similar domains or use cases.

I saw the above paragraph in the NVIDIA developer blog, along with this notice:

Coming Soon
Transfer Learning Toolkit 2.0 General Availability (Q3, 2020)
What does this mean?

Please share the NVIDIA developer blog link, thanks.

https://devblogs.nvidia.com/training-custom-pretrained-models-using-tlt/

I really need answers to the above questions. Please answer them if possible, thanks.

I want to address your last question first. I did not see "Coming Soon: Transfer Learning Toolkit 2.0 General Availability (Q3, 2020)". Could you please give more details?

https://developer.nvidia.com/tlt-getting-started

OK, that means TLT plans to release the General Availability version at that time.
Currently, TLT has already released the developer preview Docker image on NGC, version 2.0_dp.

Thanks,
I’m looking forward to your answers.

Hi,
Sorry for the late reply.

  1. If you run training, please run it in the Docker container only. After training, you will get a .tlt model, which you can export to an .etlt model. Then you can use this .etlt model to run inference on an edge device, or generate a TRT engine directly on the edge device to run inference.

  2. With the .etlt model, you can generate a TRT engine. Yes, you can run this TRT engine in Python code without the extra DeepStream package (see the Python sketch after this list). Please also see the NVIDIA Metropolis Documentation.

  3. The process is similar, but TLT provides more features and functions. See more in the TLT user guide or release notes:
    2.0_dp:
    https://docs.nvidia.com/metropolis/TLT/tlt-release-notes/index.html
    Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation
    1.0.1:
    NVIDIA Metropolis Documentation

  4. No. See my answer in (1). Yes, it is possible; for how to use a TRT engine, you can find similar material in the TensorRT samples or on NVIDIA's GitHub.

  5. Sorry, it is not supported as of now. See more in the NVIDIA Metropolis Documentation.

  6. See more details in Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation

  7. Sure, it is possible.

  8. PeopleNet and FaceDetection are already included in DeepStream 5. You can run inference directly with those pruned models. You can also use the unpruned models to train on your own data in TLT. For YOLOv3, please see Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation. Yes, it is possible.
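To illustrate (2) and (4): below is a minimal Python sketch of loading and running a TensorRT engine on the Jetson Nano, assuming the .etlt model has already been converted to an engine file on the device. The file name model.engine, the binding order (input first, outputs after), and the 3x544x960 input shape are assumptions for illustration; adapt them to your own network and preprocessing.

```python
import numpy as np
import pycuda.autoinit  # creates a CUDA context for this process
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engine(engine_path):
    # Deserialize a TensorRT engine previously generated from the exported .etlt model
    with open(engine_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())

def infer(engine, input_array):
    # Allocate host/device buffers for every binding; assumes a batch-1 engine
    # whose first binding is the input and remaining bindings are outputs.
    stream = cuda.Stream()
    host_bufs, dev_bufs, bindings = [], [], []
    for binding in engine:
        size = trt.volume(engine.get_binding_shape(binding))
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        host_mem = cuda.pagelocked_empty(size, dtype)
        dev_mem = cuda.mem_alloc(host_mem.nbytes)
        host_bufs.append(host_mem)
        dev_bufs.append(dev_mem)
        bindings.append(int(dev_mem))

    np.copyto(host_bufs[0], input_array.ravel())  # preprocessed image, flattened
    with engine.create_execution_context() as context:
        cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
        # For an engine built with an explicit batch dimension, use execute_async_v2 instead
        context.execute_async(batch_size=1, bindings=bindings, stream_handle=stream.handle)
        for host_mem, dev_mem in zip(host_bufs[1:], dev_bufs[1:]):
            cuda.memcpy_dtoh_async(host_mem, dev_mem, stream)
        stream.synchronize()
    return host_bufs[1:]  # raw output tensors; post-processing depends on the model

if __name__ == "__main__":
    engine = load_engine("model.engine")  # hypothetical path to the converted engine
    image = np.random.rand(3, 544, 960).astype(np.float32)  # example CHW input
    outputs = infer(engine, image)
    print([o.shape for o in outputs])
```

This only covers running the engine; the pre-processing (resize, normalization) and post-processing (e.g. bounding-box decoding and NMS for detection models) still have to match what the network expects, as described in the TLT/Metropolis documentation.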


Thanks so much.
What does it mean that DeepStream SDK 5 is closed-source? Is it not free?

DeepStream 5 is free and open source. Could you paste the link where "DeepStream 5 closed-source" is mentioned?

In this talk, he says that some plugins in DeepStream are open source. What do I do to use the other plugins?

For other plugins, please seek help from the DeepStream user guide or the DeepStream forum.

@Morganh
I believe you are incorrect about DeepStream being "open source". The FAQ on the DeepStream page specifically states that DeepStream is not open source.

Sorry for that. Thanks for your correction, @harryhsl8c.
@LoveNvidia Please ignore my previous comment about DeepStream.

Please see https://developer.nvidia.com/deepstream-sdk

DeepStream is a closed source SDK. Note that source for all reference applications and several plugins are available.