Hi martin.bufi
For detection, there are three kinds of networks: SSD, FasterRCNN and DetectNet_V2.
Each of them has its own set of backbones.
SSD backbones: ResNet18, ResNet10
FasterRCNN backbones: VGG19, VGG16, ResNet50, ResNet18, ResNet10, MobileNet_V2, MobileNet_V1, GoogLeNet
DetectNet_V2 backbones: VGG19, VGG16, ResNet50, ResNet18, ResNet10, MobileNet_V2, MobileNet_V1, GoogLeNet
That is great news! When using TLT, how does one decide on the actual backbone? The object detection list only shows the classification networks, with no option for selecting SSD vs. FasterRCNN vs. DetectNet_V2, etc.
Could you let me know how this is done?
When pulling the latest models:
tlt-pull -k $API_KEY -lm -o nvtltea -t iva
I only see “objectdetection_Classificationwork” etc. There is nothing in the list that mentions the backbone. Am I looking at the wrong thing?
Hi Martin,
Pre-trained models are available in NGC.
We will release the TLT GA version soon. For that version, type the command below to get a list of models.
$ ngc registry model list --org <org_name> --team <team_name>
For example, the row below shows a FasterRCNN object detection network whose backbone is ResNet18.
+--------------------------+--------------------------------------+------------------+
| Name                     | Repository                           | Application      |
+--------------------------+--------------------------------------+------------------+
| TLT ResNet18 FasterRCNN  | nvidia/iva/tlt_resnet18_faster_rcnn  | Object Detection |
+--------------------------+--------------------------------------+------------------+
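Once you know which model you want, it can also be fetched with the NGC CLI's download command, e.g. (the version tag here is only a placeholder; use the version shown in the listing):
$ ngc registry model download-version nvidia/iva/tlt_resnet18_faster_rcnn:<version>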
I have a few last questions before I stop bugging you:
1. When using the TLT docker image and running
ngc registry model list --org nvtltea --team iva
it says the ngc command is not found.
2. I was going through the documentation and noticed that the object detection examples just reference resnet18, with no clear indication of which backbone is used. With the new GA version, will it come with examples of how to use TLT with FasterRCNN? (There is a lot of configuration required by TLT to make object detection work, and the architectures of SSD and FasterRCNN are vastly different.)
3. I was unable to export a classification model to the Jetson Xavier. With the new GA release, will it come with an ARM executable?
4. NVIDIA mentions that TLT will be a good way to train models for DeepStream. Will these models also be usable in regular Python/C++ code? You need a TensorRT engine to run the model, and I would love to see a Python example that loads this object detection model and then performs live inference on a camera feed, or even just an image, to get back detections.
I love the idea of this project, but currently I am unable to integrate DeepStream into my pipeline. Being able to use this project with my own code base would be very beneficial.
Hi Martin,
For 1),
“ngc” is a client tool; you can think of it as a command-line interface. Could you check whether you can download it via “wget https://ngc.nvidia.com/downloads/ngccli_reg_linux.zip”? If not, the coming TLT GA docker contains it under “/opt/ngccli/”:
$ which ngc
/opt/ngccli/ngc
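If you download it yourself, the usual NGC CLI setup is just unzip plus API-key configuration, roughly like this (assuming the archive unpacks to the ngc binary, as in the standard NGC CLI install instructions):
$ wget https://ngc.nvidia.com/downloads/ngccli_reg_linux.zip
$ unzip ngccli_reg_linux.zip
$ chmod u+x ngc
$ ./ngc config set   # prompts for your API key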
For 2),
The names of the different pre-trained models tell you which backbone is used.
For the GA version, there is a Jupyter notebook showing how to use TLT with FasterRCNN.
For 3),
No, there is not any executable of that kind. But I want to know why you were unable to export the model to Xavier. It should actually work.
For 4),
In the notebooks you can see that there are some Python examples. They do inference and visualize the results on images.
Hi Martin,
For 3), yes, the converter utility included in the docker only works for x86 devices.
The upcoming release will ask end users to download the converter from a separate link; that version works on Jetson devices.
GA will be released soon, but I'm not sure of the exact date.
Hope all is well. I just pulled the latest docker image. I can see all the models you mentioned, and there is a FasterRCNN example notebook.
Questions for you:
1. Where is the link to download the converter for Jetson devices?
2. I see no example of how to use Python code to run inference with the TLT models we trained. When doing inference in the notebook, it uses the command-line tool
tlt-infer
which does not work in my case, since I need to embed this in an existing Python project.
3. Do the newly trained FasterRCNN models work on the latest TensorRT 6.0? Are there any Python code examples showing how to take the newly trained TLT models, build a TensorRT engine, and then run inference on images?
For 2),
Sorry, in the notebook, doing inference with custom Python code is not supported right now. We only expose the tlt-infer command line to generate inferences with the models.
Secondly, we support inference in DeepStream with a TRT engine or an etlt model. The TRT engine can be generated by the tlt-converter tool from an etlt model; the etlt model is generated by the tlt-export tool.
How to run inference using the trained FasterRCNN models from TLT is shown at https://github.com/NVIDIA-AI-IOT/deepstream_4.x_apps (DeepStream 4.x samples to deploy TLT training models). The pre-processing and post-processing code is already exposed in C++ inside the nvdsinfer_customparser_frcnn_uff folder.
For 3),
Regarding the above-mentioned GitHub sample, we only tested on TRT 5.1 GA. TRT 6.0 should more or less also work, but we can't guarantee that. Currently, we only provide the DeepStream sample on GitHub, based on DS 4.0.
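That said, the engine file produced by tlt-converter is a regular TensorRT engine, so outside of what TLT officially supports it can in principle be deserialized with the standard TensorRT Python API. Below is only a rough sketch, not an official TLT sample: the engine and plugin file names are placeholders, the FasterRCNN custom plugins have to be available before deserialization, and the pre/post-processing from the C++ parser above still has to be reimplemented.

import numpy as np
import pycuda.autoinit            # creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

# FasterRCNN engines rely on custom TensorRT plugins; load the plugin library
# built from the deepstream_4.x_apps repo first, e.g.:
#   import ctypes; ctypes.CDLL("./libnvdsinfer_customparser_frcnn_uff.so")  # hypothetical name

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

# Deserialize the engine produced by tlt-converter (file name is a placeholder).
runtime = trt.Runtime(TRT_LOGGER)
with open("frcnn_kitti_retrain.int8.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate host/device buffers for every binding (inputs and outputs).
bindings, host_bufs = [], {}
for idx in range(engine.num_bindings):
    dtype = trt.nptype(engine.get_binding_dtype(idx))
    host = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(idx)), dtype)
    dev = cuda.mem_alloc(host.nbytes)
    bindings.append(int(dev))
    host_bufs[idx] = (host, dev)

# Copy a pre-processed image into the input binding, run, and fetch raw outputs.
in_idx = [i for i in range(engine.num_bindings) if engine.binding_is_input(i)][0]
host_in, dev_in = host_bufs[in_idx]
host_in[:] = np.random.rand(host_in.size).astype(host_in.dtype)   # stand-in for a real image
cuda.memcpy_htod(dev_in, host_in)
context.execute(batch_size=1, bindings=bindings)
for i, (host, dev) in host_bufs.items():
    if not engine.binding_is_input(i):
        cuda.memcpy_dtoh(host, dev)   # raw network outputs; post-processing still required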
1) Is there any sample Python code you have that shows how I can generate a TensorRT engine from an existing etlt model?
2) If I can generate a TRT engine with the tlt-converter tool from an etlt model, am I able to then load that engine with Python? If so, is there documentation on how to do so?
Hi Martin,
1) For the Jetson platform, once the tlt-converter is downloaded from the dev zone, you can use it to generate a TensorRT engine from the etlt model. A Python code sample is not needed for that step.
For example, suppose tlt-export generated frcnn_kitti_retrain.int8.etlt and cal.bin in INT8 mode. Then the command below shows how to generate a TRT engine (INT8).
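A command along these lines will do it (only a sketch: the key variable, input dimensions and FasterRCNN output node names below are the ones used in the KITTI example and may need to be adjusted for your own spec file):
tlt-converter -k $API_KEY \
              -d 3,384,1248 \
              -o dense_class_td/Softmax,dense_regress_td/BiasAdd,proposal \
              -c cal.bin \
              -e frcnn_kitti_retrain.int8.engine \
              -m 4 \
              -t int8 \
              frcnn_kitti_retrain.int8.etlt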