Which preprocessing for Faster R-CNN?


I have trained a Faster R-CNN ResNet-50 model with the Transfer Learning Toolkit. When I run the tlt-infer command I get good results.

Now my objective is to run the model in my own C++ application, which uses TensorRT.
To do this, I converted my model (.etlt) to a TensorRT engine file (.engine) using the tlt-converter command.

I’m able to run the model in my C++ application without crashes, but the results differ from those obtained with the tlt-infer command. The results are consistent, yet some detections are missing and the boxes found differ slightly from those given by tlt-infer.

I suspect my mistake is in the pre- or post-processing steps.
For pre-processing I simply perform a resize, a mean subtraction, and a multiplication by the scale factor specified in my spec.txt file.
For post-processing I used the code from the DeepStream custom app (provided in the “Get started” tutorial).
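For reference, the pre-processing described above (mean subtraction and scaling into planar CHW order, after the resize) can be sketched roughly as follows. This is only an illustrative sketch, not the actual tlt-infer implementation; the function name is mine, and the per-channel mean values and scale factor must be taken from your own spec.txt.

```cpp
#include <cstdint>
#include <vector>

// Sketch: convert an interleaved HWC uint8 BGR image into a planar
// CHW float buffer, subtracting a per-channel mean and applying a
// scale factor. Replace mean/scale with the values from spec.txt.
std::vector<float> preprocess(const std::vector<std::uint8_t>& hwc,
                              int height, int width,
                              const float mean[3], float scale) {
    std::vector<float> chw(3 * height * width);
    for (int c = 0; c < 3; ++c) {
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                std::uint8_t v = hwc[(y * width + x) * 3 + c];
                chw[c * height * width + y * width + x] =
                    (static_cast<float>(v) - mean[c]) * scale;
            }
        }
    }
    return chw;
}
```

The resulting CHW float buffer is what would be copied into the TensorRT input binding.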

I’m not able to find any example of the pre-processing step used by tlt-infer.

Are there any extra steps in the tlt-infer preprocessing that are not configured in the spec file?

Besides this, I have no such problem with an SSD model trained with TLT and implemented in my C++ application.


deepstream custom app: https://github.com/NVIDIA-AI-IOT/deepstream_4.x_apps/blob/master/nvdsinfer_customparser_frcnn_uff/nvdsinfer_custombboxparser_frcnn_uff.cpp

Hi Steventel,
You mentioned that you get correct results with SSD but not with Faster R-CNN.
So I assume you are hitting the same issue as https://devtalk.nvidia.com/default/topic/1069113/?comment=5415841
Please refer to it and have a try. Thanks.

Update: there is no additional preprocessing.
Also, please note that the image channel ordering should be BGR, not RGB. Please check whether your post-processing is correct.
Furthermore, there is an official DeepStream sample you can refer to.
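As a quick illustration of the channel-ordering point: if the application decodes images as RGB, the red and blue channels need to be swapped before inference. A minimal sketch for interleaved pixel data (the function name is mine; with OpenCV you would typically use cv::cvtColor with cv::COLOR_RGB2BGR instead):

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Swap the first and third channel of every interleaved HWC pixel
// in place, turning RGB into BGR (or vice versa).
void swapRedBlue(std::vector<std::uint8_t>& hwc) {
    for (std::size_t i = 0; i + 2 < hwc.size(); i += 3) {
        std::swap(hwc[i], hwc[i + 2]);
    }
}
```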