Hi!
I realize that image normalization in DeepStream is controlled by net-scale-factor and offsets. In the sample configs I have seen net-scale-factor=0.0039215697906911373, which is effectively a division by 255.
However, my model uses the classic PyTorch normalization: mean = [0.485, 0.456, 0.406], std = [0.229, 0.224, 0.225]. How do I translate this to DeepStream?
I took a look at the RetinaNet example (retinanet-examples/infer_config_batch1.txt at main · NVIDIA/retinanet-examples · GitHub), and it looks like they use net-scale-factor=0.017352074, offsets=123.675;116.28;103.53 with the same PyTorch normalization as mine. Is that correct?
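Here is how I am reasoning about the conversion, assuming nvinfer preprocesses each pixel as y = net-scale-factor * (x - offset) with x in [0, 255] (please correct me if that assumption is wrong). Since PyTorch's Normalize computes y = (x/255 - mean) / std = (x - 255*mean) / (255*std), the offsets would be 255*mean, and because net-scale-factor is a single scalar, the per-channel stds have to be averaged:

```python
# Sketch of my conversion, assuming nvinfer does: y = net-scale-factor * (x - offset)
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

offsets = [255 * m for m in mean]        # -> [123.675, 116.28, 103.53]
avg_std = sum(std) / len(std)            # -> 0.226 (single scalar, so stds are averaged)
net_scale_factor = 1 / (255 * avg_std)   # -> ~0.017352074

print("offsets =", ";".join(f"{o:.3f}" for o in offsets))
print("net-scale-factor =", net_scale_factor)
```

This reproduces exactly the numbers from the RetinaNet config, so I suspect that is what they did, but I would appreciate confirmation that this is the intended way to handle per-channel std in DeepStream.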