Information about the detection networks

To better understand TLT's use cases, I would like to know what the back-end object detection networks are.

SSD, YOLO, Faster RCNN, RetinaNet?

In my experience deploying AI, each network has its place, and I would like to know more details about this.

Thanks!

Hi martin.bufi,
For detection, there are three kinds of networks: SSD, Faster-RCNN, and DetectNet_V2.
Each of them supports several backbones.
SSD backbones: ResNet18, ResNet10
Faster-RCNN backbones: VGG19, VGG16, ResNet50, ResNet18, ResNet10, MobileNet_V2, MobileNet_V1, GoogLeNet
DetectNet_V2 backbones: VGG19, VGG16, ResNet50, ResNet18, ResNet10, MobileNet_V2, MobileNet_V1, GoogLeNet

Hello,

That is great news! When using TLT, how does one decide on the actual backbone? The object detection list only shows the classification networks, with no option for selecting SSD vs. Faster RCNN vs. DetectNet_V2, etc.

Could you let me know how this is done?

When pulling the latest models:

tlt-pull -k $API_KEY -lm -o nvtltea -t iva

I only see “objectdetection_Classificationwork” etc. There is nothing in the list that mentions the backbone. Am I looking at the wrong thing?

Thanks,
Martin

Hi Martin,
Pre-trained models are available in NGC.
We will release the TLT GA version soon. For this version, type the command below to get a list of models.
$ ngc registry model list --org --team

For example, the listing below shows a FasterRCNN object detection network; its backbone is ResNet18.
+-------------------------+--------------------------------------+------------------+
| Name                    | Repository                           | Application      |
+-------------------------+--------------------------------------+------------------+
| TLT ResNet18 FasterRCNN | nvidia/iva/tlt_resnet18_faster_rcnn  | Object Detection |
+-------------------------+--------------------------------------+------------------+

Hi Morganh,

Thank you for the prompt responses!

I have a few last questions before I stop bugging you:

  1. When using the TLT docker image and running
ngc registry model list --org nvtltea --team iva

it says that the ngc command is not found.

  2. I was going through the documentation and noticed that the object detection examples just reference resnet18, with no clear indication of which backbone is used. With the new GA version, will it come with examples of how to use TLT with FasterRCNN? There is a lot of config required by TLT to make object detection work, and the architectures of SSD and FasterRCNN are vastly different.

  3. I was unable to export a classification model to the Jetson Xavier. With the new GA release, will it come with an ARM .exe?

  4. NVIDIA mentions that TLT will be a good way to train models for DeepStream. Will these models also be usable in regular Python/C++ code? You need a TensorRT engine to run the model, and I would love to see an example in Python that shows loading this object detection model and then performing live inference on a camera feed, or even just an image, to get back detections.

I love the idea of this project, but currently I am unable to integrate DeepStream into my pipeline. Being able to use this project with my code base would be very beneficial.

Thanks for all the hard work!

Hi Martin,
For 1),
“ngc” is a client tool; you can think of it as a command-line interface. Could you check whether you can download it via “wget https://ngc.nvidia.com/downloads/ngccli_reg_linux.zip”? If not, the coming TLT GA docker contains it under “/opt/ngccli/”:
$ which ngc
/opt/ngccli/ngc

For 2),
The names of the different pre-trained models tell you which backbone is used.
For the GA version, there is a Jupyter notebook showing how to use TLT with FasterRCNN.

For 3),
No, there is no exe-format file. But I want to know why you are unable to export the model to Xavier; it should actually work.

For 4),
In the notebooks, you can see there are some examples in Python. They do inference and visualize the results on images.
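
For reference, that visualization step boils down to drawing the predicted boxes onto the image. Below is a minimal sketch in Python; the (label, confidence, box) tuple format and the file names are illustrative placeholders, not the notebooks' exact output.

# Minimal sketch: overlay detections on an image with Pillow.
# The (label, confidence, box) format is a placeholder, not the exact
# structure produced by the TLT notebooks.
from PIL import Image, ImageDraw

def draw_detections(image_path, detections, out_path):
    # detections: iterable of (label, confidence, (x1, y1, x2, y2))
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    for label, conf, (x1, y1, x2, y2) in detections:
        draw.rectangle([x1, y1, x2, y2], outline="red", width=2)
        draw.text((x1, max(0, y1 - 12)), "%s %.2f" % (label, conf), fill="red")
    img.save(out_path)

# Hypothetical usage with made-up values:
# draw_detections("frame.jpg", [("car", 0.92, (100, 150, 360, 300))], "frame_out.jpg")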

Hi Morgan,

For 2) Great, I look forward to seeing this.

For 3)

I refer to this post

For 4) Great.

Thank you for this information! This is good news. Could you let me know the approximate date when GA will come out?

Thank you,
Martin

Hi Martin,
For 3), yes, the converter utility included in the docker only works on x86 devices.
The upcoming release will ask end users to download the converter from a separate link; that version works on Jetson devices.

GA will be released soon, but I'm not sure of the exact date.

Hi Morganh,

Hope all is well. I just pulled the latest docker image. I can see all the models you mentioned, and there is a FasterRCNN example notebook.

Questions for you:

  1. Where is the link to download the converter for Jetson devices?
  2. I see no example of how to use Python code to run inference with the TLT models we trained. When doing inference in the notebook, it uses the command-line tool
tlt-infer

which does not work in my case, since I need to embed this in an existing Python project.
  3. Do the newly trained FasterRCNN models work with the latest TensorRT 6.0? Are there any Python code examples showing how to grab the newly trained TLT models, build a TensorRT engine, and then run inference on images?

Thank you!

Hi Martin,
For the Jetson platform, the tlt-converter is available to download at https://developer.nvidia.com/tlt-converter

May I know which model you want to do inference on with Python code, the tlt model or the etlt model?

For 2),
Sorry, in the notebook, doing inference with custom Python code is not supported right now. We only expose the tlt-infer command line to generate inferences with the models.
Secondly, we support inference in DeepStream with either a TRT engine or the etlt model. The TRT engine can be generated by the tlt-converter tool from the etlt model, and the etlt model is generated by the tlt-export tool.
How to run inference using the trained FasterRCNN models from TLT is shown on GitHub: GitHub - NVIDIA-AI-IOT/deepstream_4.x_apps: deepstream 4.x samples to deploy TLT training models. The pre-processing and post-processing code is already exposed in C++ inside the nvdsinfer_customparser_frcnn_uff folder.

For 3)
See the above-mentioned GitHub repo; we only tested on TRT 5.1 GA. TRT 6.0 should more or less also work, but we can't guarantee it. Currently, we only provide a DeepStream sample on GitHub, based on DS 4.0.

Hello Morganh,

Thank you for the reply.

  1. I would like to do inference with the etlt model, since this is the exported model. The following TensorRT GitHub repo by NVIDIA tells me I can use TLT etlt models with the latest release:
    https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleUffFasterRCNN

Do you have any sample Python code that shows how I can generate a TensorRT engine from an existing etlt model?

2) If I can generate a TRT engine with the tlt-converter tool from an etlt model, am I able to then load that engine with Python? If so, is there documentation on how to do so?

Thank you for all your support!

Hi Martin,
1) For the Jetson platform, once the tlt-converter is downloaded from the dev zone, you can use it to generate a TensorRT engine from the etlt model; a separate Python code sample is not needed for that step.
For example, tlt-export generated frcnn_kitti_retrain.int8.etlt and cal.bin in INT8 mode. The command below shows how to generate a TRT engine (INT8).

$ ./tlt-converter frcnn_kitti_retrain.int8.etlt -e frcnn_int8.engine -k <your ngc key> -c cal.bin -o dense_regress/BiasAdd,dense_class/Softmax,proposal -d 3,384,1280 -b 8 -m 4 -t int8 -i nchw
2) Sorry, there is no such documentation in the newly released TLT doc.
    See Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation.
    There we mention that a DeepStream sample with documentation on how to run inference using the trained FasterRCNN models from TLT is provided on GitHub at: GitHub - NVIDIA-AI-IOT/deepstream_4.x_apps: deepstream 4.x samples to deploy TLT training models
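
For completeness, if you eventually want to load the generated engine directly from Python rather than through DeepStream, the TensorRT Python API together with pycuda can deserialize the engine produced by tlt-converter. Below is a minimal sketch under those assumptions, not an official TLT sample; the engine file name and input shape come from the tlt-converter command above, and the assumption that binding 0 is the image input should be verified against your own engine.

# Minimal sketch (not an official TLT sample): deserialize the engine built by
# tlt-converter and run one inference. Assumes the TensorRT Python bindings and
# pycuda are installed, and that binding 0 is the image input.
import numpy as np
import pycuda.autoinit          # creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with open("frcnn_int8.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# Allocate host/device buffers for every binding (inputs and outputs).
host_bufs, dev_bufs, bindings = [], [], []
for binding in engine:
    size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    host_mem = cuda.pagelocked_empty(size, dtype)
    dev_mem = cuda.mem_alloc(host_mem.nbytes)
    host_bufs.append(host_mem)
    dev_bufs.append(dev_mem)
    bindings.append(int(dev_mem))

stream = cuda.Stream()
with engine.create_execution_context() as context:
    # Copy a preprocessed image (NCHW, 3x384x1280 per the -d flag above) into binding 0.
    host_bufs[0][:] = np.random.rand(host_bufs[0].size).astype(host_bufs[0].dtype)
    cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
    context.execute_async(batch_size=1, bindings=bindings, stream_handle=stream.handle)
    for h, d in zip(host_bufs[1:], dev_bufs[1:]):
        cuda.memcpy_dtoh_async(h, d, stream)
    stream.synchronize()

# host_bufs[1:] now hold the raw network outputs; FasterRCNN post-processing
# (done in C++ in the DeepStream sample) is still needed to turn them into boxes.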