How to do inference with a TLT Faster RCNN model?

Hello everyone,

I have trained an frcnn_resnet18 model with the Transfer Learning Toolkit, using the Docker image downloaded from NGC, on my host machine.

I can do inference with the DeepStream custom app given in the IVA Getting Started Guide; it seems to work well on the Nano.

My objective now is to run inference with TensorRT only. For this I use the TensorRT sample, which works well with Faster RCNN models trained with TensorFlow or Caffe (and optimized for TensorRT with a UFF parser).

Inference with an SSD model trained with TLT and converted to a TRT engine runs successfully, but it is not working for a Faster RCNN model: the inference runs, but the outputs of the network are weird; the positions of the bounding boxes are always between 1 and 3.

Is the post-processing of a Faster RCNN model trained with TLT different?

DeepStream custom app: GitHub - NVIDIA-AI-IOT/deepstream_4.x_apps (deepstream 4.x samples to deploy TLT training models)

TensorRT sample used: https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleFasterRCNN

Hi,

In TLT, the RoI coordinates are (y1, x1, y2, x2), while in Caffe they are (x1, y1, x2, y2), so the sample's post-processing has to read the box coordinates in that order. Please refer to the link below for more details; a minimal decoding sketch follows it.
https://devtalk.nvidia.com/default/topic/1069113/transfer-learning-toolkit/-how-to-do-inference-with-a-tlt-faster-rcnn-model-/post/5415841/#5415841
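
As an illustration only (the buffer name, layout, and helper function here are assumptions, not the actual sampleFasterRCNN code), the fix amounts to swapping the coordinate order when reading each RoI from the TLT engine's output:

```cpp
#include <vector>

// Hypothetical post-processing sketch: a TLT-trained Faster RCNN emits RoI
// coordinates as (y1, x1, y2, x2), whereas the Caffe model assumed by
// sampleFasterRCNN uses (x1, y1, x2, y2).
struct BBox
{
    float x1, y1, x2, y2;
};

// 'rois' is a flat host buffer of numRois * 4 floats copied back from the
// TensorRT engine; the name and layout are assumptions for illustration.
std::vector<BBox> decodeTltRois(const float* rois, int numRois)
{
    std::vector<BBox> boxes;
    boxes.reserve(numRois);
    for (int i = 0; i < numRois; ++i)
    {
        const float* r = rois + i * 4;
        BBox b;
        b.y1 = r[0]; // TLT order: y1 comes first
        b.x1 = r[1];
        b.y2 = r[2];
        b.x2 = r[3];
        boxes.push_back(b);
    }
    return boxes;
}
```

If you adapt the sample in place rather than adding a helper like this, the same reordering would apply wherever it indexes the RoI output buffer.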

Thanks