Why do we need image pre- and post-processing when deploying a TLT model with TensorRT?

Hi all.
System info: GPU, TLT version 2.
I know I should convert my TLT model into a TensorRT engine, but I don't understand why we need to do image pre-processing on the input and post-processing on the output after inference.

Any example that helps me understand the need for image pre-processing and output post-processing would be appreciated.

In TLT, by default, you do not need to write any pre- or post-processing yourself.
If you run inference via tlt-infer, tlt-infer handles it for you.
If you run inference via DeepStream, it also sets everything up for you.

But if you want to run inference via your own code, you need to implement them yourself.


Thank you @Morganh for your reply.

Hi all and dear @Morganh.
I ran tlt-infer on a test video with my SSD object detection model, and it worked perfectly.
But I also need to write my own Python code for the inference part.
How should I know which pre- and post-processing steps to take? Is there any documentation for that?
(For example, my SSD model's backbone is mobilenet_v1, so one step is to normalize frames the way mobilenet_v1 expects. What else should I do?)

Can someone help me?

Refer to postprocessing in https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/blob/master/post_processor/nvdsinfer_custombboxparser_tlt.cpp
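The parser in that repository is C++; a minimal Python sketch of the same idea is below. It assumes the detections come out of the TensorRT NMS plugin as rows of [image_id, class_id, confidence, xmin, ymin, xmax, ymax] with normalized coordinates plus a separate keep count; the exact layer names and layout are assumptions here, so check the linked parser for the authoritative version.

```python
import numpy as np

def parse_nms_output(nms_boxes, keep_count, img_w, img_h, conf_threshold=0.3):
    """Hypothetical parser for the NMS-style output of a TLT SSD engine.

    Assumed layout (verify against nvdsinfer_custombboxparser_tlt.cpp):
      - nms_boxes: float array of shape (keep_top_k, 7), each row
        [image_id, class_id, confidence, xmin, ymin, xmax, ymax],
        with coordinates normalized to [0, 1].
      - keep_count: number of valid rows in nms_boxes.
    Returns a list of (class_id, confidence, x1, y1, x2, y2) in pixel coords.
    """
    detections = []
    for det in nms_boxes[: int(keep_count)]:
        _, class_id, conf, x1, y1, x2, y2 = det
        if conf < conf_threshold:
            continue  # drop low-confidence detections
        detections.append((
            int(class_id), float(conf),
            x1 * img_w, y1 * img_h, x2 * img_w, y2 * img_h,
        ))
    return detections
```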

Refer to preprocessing in https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/blob/master/configs/ssd_tlt/pgie_ssd_tlt_config.txt
In TLT, the pre-processing of an SSD input image is as follows (see the sketch after this list):

  • assume RGB input values as floats in the range 0.0 to 255.0
  • convert from RGB to BGR
  • then subtract the per-channel means 103.939, 116.779, and 123.68 from the B, G, and R channels, respectively.
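A minimal Python sketch of those three steps, assuming a 300x300 network input and NCHW layout (adjust these to whatever your exported model's input binding actually expects):

```python
import cv2
import numpy as np

# Per-channel means subtracted after the RGB -> BGR swap (B, G, R order).
BGR_MEANS = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def preprocess(frame_rgb, input_w=300, input_h=300):
    """Pre-process one RGB frame for a TLT SSD TensorRT engine.

    The 300x300 input size and NCHW output layout are assumptions;
    match them to your exported model's input binding.
    """
    # Resize to the network input resolution; values stay in 0..255 as float.
    img = cv2.resize(frame_rgb, (input_w, input_h)).astype(np.float32)
    # Convert RGB -> BGR.
    img = img[:, :, ::-1]
    # Subtract the per-channel means.
    img -= BGR_MEANS
    # HWC -> CHW and add a batch dimension for the engine.
    return np.ascontiguousarray(img.transpose(2, 0, 1)[np.newaxis, ...])
```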

Thanks @Morganh