I have trained a Faster R-CNN ResNet-50 model with the Transfer Learning Toolkit; when I run the tlt-infer command I get good results.
Now my objective is to run the model in my own C++ application, which uses TensorRT.
To do this, I converted my model (.etlt) to a TensorRT engine file (.engine) using the tlt-converter command.
I’m able to run the model in my C++ application without a crash, but the results differ from those obtained with the tlt-infer command: the results are consistent from run to run, yet some detections are missing and the boxes found are slightly different from those given by tlt-infer.
I suspect my mistake is in the pre- or post-processing steps.
For the pre-processing step I simply perform a resize, a mean subtraction, and a scale factor as specified in my spec.txt file.
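For reference, the pre-processing I do can be sketched roughly as follows. This is a hypothetical illustration, not the code from my application: it assumes the resize has already been done, and converts an interleaved BGR uint8 image (HWC) to planar CHW float, subtracting a per-channel mean and applying a scale factor. The mean and scale arguments are placeholders that should match the values in the spec.txt file.

```cpp
#include <cstddef>
#include <vector>

// Sketch of the pre-processing described above (illustrative only).
// bgr:   resized image, interleaved BGR, 8-bit per channel (HWC layout)
// mean:  per-channel mean to subtract, in B, G, R order (from spec.txt)
// scale: scale factor applied after the mean subtraction (from spec.txt)
// Returns the planar CHW float buffer fed to the TensorRT engine.
std::vector<float> preprocess(const unsigned char* bgr, int width, int height,
                              const float mean[3], float scale) {
    std::vector<float> chw(3 * static_cast<std::size_t>(width) * height);
    for (int c = 0; c < 3; ++c) {
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                const unsigned char px = bgr[(y * width + x) * 3 + c];
                // Planar layout: channel-major, then row, then column.
                chw[(static_cast<std::size_t>(c) * height + y) * width + x] =
                    (static_cast<float>(px) - mean[c]) * scale;
            }
        }
    }
    return chw;
}
```

The details that seem most likely to cause a mismatch with tlt-infer are the channel order (BGR vs. RGB), whether the mean is subtracted before or after scaling, and the interpolation method used for the resize; a discrepancy in any of these could plausibly produce shifted boxes and missed detections like the ones I see.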
For the post-processing I have used the code from the DeepStream custom app (given in the “Get started” tutorial).
I haven’t been able to find any example of the pre-processing used by tlt-infer.
Is there any extra step in tlt-infer’s pre-processing that is not controlled by the spec file?
Besides this, I have no such problem with an SSD model trained with TLT and run in the same C++ application.