I’ve trained an SSD MobileNet V1 model using the Transfer Learning Toolkit. I evaluated the trained model with the tlt-evaluate tool and the results are OK.
Now I’m trying to run inference using custom code (based on the TensorRT Developer Guide examples). I expected the color channel values of input images to be in the [-1.0, 1.0] range, but the results were poor. I then tried scaling pixel values to [0.0, 1.0], which didn’t change anything. Finally I got good results by passing values in the [0.0, 255.0] range. So my question is: what is the expected range of input color channel values for a TLT-trained SSD MobileNet V1 model? Is it [0.0, 255.0], or is there a problem in my inference code?
In TLT, the pre-processing for SSD images is as follows:

1. Assume RGB input values in the range 0.0 to 255.0 as float.
2. Convert from RGB to BGR.
3. Subtract the per-channel means 103.939, 116.779, and 123.68 from the B, G, and R channels respectively.
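The steps above can be sketched in Python with NumPy. This is a minimal illustration, not code from TLT or DeepStream; the function name `preprocess_ssd` and the HWC-to-CHW transpose (common for TensorRT engine inputs) are assumptions:

```python
import numpy as np

def preprocess_ssd(image):
    """Sketch of the TLT SSD pre-processing described above.

    Assumes `image` is an HxWx3 uint8 array in RGB order with
    values in [0, 255].
    """
    x = image.astype(np.float32)       # keep values in [0.0, 255.0]
    x = x[..., ::-1]                   # step 2: RGB -> BGR
    # step 3: per-channel mean subtraction, means given in BGR order
    mean = np.array([103.939, 116.779, 123.68], dtype=np.float32)
    x = x - mean
    # Transpose HWC -> CHW, a common layout for TensorRT inputs
    # (assumption: adjust to match your engine's input binding).
    return np.transpose(x, (2, 0, 1))
```

Note there is no division or scaling step, which matches your observation that values in [0.0, 255.0] (after mean subtraction) give good results, while [-1.0, 1.0] or [0.0, 1.0] do not.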
Please make sure you can first run the model successfully with DeepStream and get correct bounding boxes, then implement your own standalone code by checking the pre-processing/post-processing in the DeepStream samples.