PyTorch normalization in DeepStream config

Hi!

I realize that image normalization in DeepStream is controlled by net-scale-factor and offsets. I have seen sample configs that just use net-scale-factor=0.0039215697906911373, which is essentially 1/255, i.e. it just scales pixel values from [0, 255] to [0, 1].

However, my model uses the classic PyTorch normalization: mean = [0.485, 0.456, 0.406], std = [0.229, 0.224, 0.225]. How do I translate this into the DeepStream parameters?

I took a look at the RetinaNet example (https://github.com/NVIDIA/retinanet-examples/blob/master/extras/deepstream/deepstream-sample/infer_config_batch1.txt); it looks like they use net-scale-factor=0.017352074 and offsets=123.675;116.28;103.53 with the same PyTorch normalization as mine. Is that correct?
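(For reference, the arithmetic: 255 * 0.485 = 123.675, 255 * 0.456 = 116.28, 255 * 0.406 = 103.53, and with the average std of 0.226, 1 / (255 * 0.226) ≈ 0.017352074. So those values are consistent with offsets = 255 * mean and net-scale-factor = 1 / (255 * avg(std)).)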

Hi,

The normalization equation used in DeepStream looks like this:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html

y = net-scale-factor * (x - offsets)

The offsets parameter plays the same role as the mean value in PyTorch (scaled to the 0-255 pixel range).
However, we don’t have a configuration parameter for a per-channel std; the single net-scale-factor has to absorb it.
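To make the mapping explicit (assuming the raw network input x is in the 0-255 range), PyTorch computes

y = (x/255 - mean) / std

which can be rewritten as

y = 1/(255*std) * (x - 255*mean)

Matching this against y = net-scale-factor * (x - offsets) gives, per channel, offsets = 255*mean and net-scale-factor = 1/(255*std). Since net-scale-factor is a single scalar while std has three per-channel values, std has to be approximated by one number, e.g. the average of the three.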

Here is a discussion about calculating the corresponding net-scale-factor and offsets values from the PyTorch mean and std.
It’s recommended to check it first:

Thanks.

Hi,

But I don’t understand the calculations:

If I have mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225], then I can take an average value for std, e.g. 0.226, and calculate net-scale-factor = 1/128/0.578 * 0.226 = 0.0030547145328.
But what do I do with the mean values [0.485, 0.456, 0.406]? Should I add the offsets parameter?
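Here is my current attempt as a small Python sketch, assuming the conversion works like in the RetinaNet example above (offsets = 255 * mean, net-scale-factor = 1 / (255 * avg(std))); please correct me if this is wrong:

```python
# My attempt at converting PyTorch mean/std to nvinfer parameters,
# assuming offsets = 255 * mean and net-scale-factor = 1 / (255 * avg(std)).
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

avg_std = sum(std) / len(std)               # 0.226 (single-scalar approximation)
net_scale_factor = 1.0 / (255.0 * avg_std)  # ~0.017352074
offsets = [255.0 * m for m in mean]         # [123.675, 116.28, 103.53]

# Ready-to-paste lines for the [property] section of the nvinfer config:
print(f"net-scale-factor={net_scale_factor:.9f}")
print("offsets=" + ";".join(f"{o:g}" for o in offsets))
```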

Hi rostislav.etc,

Please open a new topic for your issue. Thanks.