I am using SSD with TLT 2.0. Do the input images get normalized by subtracting the per-channel mean before training?
In TLT, SSD image preprocessing works as follows:
- assume RGB input values ranging from 0.0 to 255.0 as float
- convert from RGB to BGR
- subtract the per-channel means 103.939, 116.779, and 123.68 from the B, G, and R channels respectively.
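The steps above can be sketched in NumPy like this (a minimal illustration of the described preprocessing, not the actual TLT source code; the function name is my own):

```python
import numpy as np

def preprocess(rgb: np.ndarray) -> np.ndarray:
    """Apply SSD-style preprocessing to a float32 RGB image of shape (H, W, 3)
    with values in [0.0, 255.0]."""
    bgr = rgb[..., ::-1]  # swap channel order: RGB -> BGR
    # Per-channel means in BGR order
    mean = np.array([103.939, 116.779, 123.68], dtype=np.float32)
    return bgr - mean  # subtract the mean from each channel
```

For example, a pixel with RGB values (10, 20, 30) becomes BGR (30, 20, 10) and then (30 − 103.939, 20 − 116.779, 10 − 123.68) after mean subtraction.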