Hi all.
System info: GPU, TLT version 2.
I know I should convert my TLT model into TensorRT, but I don't understand why we need to do image pre-processing before inference and post-processing on the output after inference.
Any example that could help me see why image pre-processing and output post-processing are needed would be appreciated.
In TLT, by default, you do not need to write any pre- or post-processing.
If you run inference via tlt-infer, tlt-infer handles it for you.
If you run inference via DeepStream, it also sets everything up for you.
But if you want to run inference via your own code, both are needed.
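For a typical detection model exported to TensorRT, the steps usually look roughly like the sketch below. Note this is only an illustration under assumptions of my own (a fixed 300x300 network input, "tf"-style normalization to [-1, 1], NCHW layout, and an engine whose output is already decoded to [label, confidence, x1, y1, x2, y2] rows with normalized coordinates); the actual values must match your training spec, and if your engine outputs raw predictions you also need anchor decoding and NMS.

```python
# A minimal sketch of the pre/post-processing typically needed around a
# TensorRT detection engine when you bypass tlt-infer/DeepStream.
# Assumptions (verify against your own training spec):
#   - NCHW float32 input of shape (1, 3, H, W)
#   - "tf"-style normalization of pixels to [-1, 1]
#   - engine output already decoded to rows of
#     [label, confidence, x1, y1, x2, y2] in normalized coordinates
import cv2
import numpy as np

INPUT_W, INPUT_H = 300, 300   # hypothetical; use the size from your spec

def preprocess(frame_bgr):
    """BGR video frame -> normalized NCHW float32 batch of one."""
    img = cv2.resize(frame_bgr, (INPUT_W, INPUT_H))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32)
    img = img / 127.5 - 1.0        # scale pixels to [-1, 1]
    img = img.transpose(2, 0, 1)   # HWC -> CHW
    return img[np.newaxis, ...]    # add batch dimension -> (1, 3, H, W)

def postprocess(detections, orig_w, orig_h, conf_thresh=0.5):
    """Keep confident detections and map boxes back to pixel coordinates."""
    results = []
    for label, conf, x1, y1, x2, y2 in detections:
        if conf < conf_thresh:
            continue
        results.append((int(label), float(conf),
                        int(x1 * orig_w), int(y1 * orig_h),
                        int(x2 * orig_w), int(y2 * orig_h)))
    return results

if __name__ == "__main__":
    dummy = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in for a frame
    batch = preprocess(dummy)                         # feed this to the engine
    print(batch.shape)                                # (1, 3, 300, 300)
```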
Hi all, and dear @Morganh.
I ran tlt-infer on a test video with my SSD object detection model, and it worked perfectly.
But I also need to write my own Python code for the inference part.
How do I know which pre- and post-processing steps to take? Is there any documentation for that?
(For example, my SSD model's backbone is mobilenet_v1, so one step is to normalize frames the way MobileNetV1 expects, as in the snippet below. What else should I do?)
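To be concrete, here is what I mean by the normalization step. This is only my guess at Keras-style MobileNet preprocessing (scaling pixels to [-1, 1]); I'm not sure it matches the preprocess mode TLT actually used during training:

```python
import numpy as np

def mobilenet_v1_normalize(img_rgb):
    """Scale uint8 RGB pixels to [-1, 1], Keras-style MobileNet preprocessing.
    (My assumption; TLT's actual mode comes from the training spec.)"""
    return img_rgb.astype(np.float32) / 127.5 - 1.0
```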