Hello,
I have trained a ResNet-18 classifier in PyTorch and exported it to an ONNX model.
The FC layer of the ResNet-18 is set to:
model.fc = nn.Sequential(
    nn.Linear(model.fc.in_features, args.n_classes, bias=False),
    nn.Softmax(dim=1),
)
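The ONNX export itself was along these lines (a sketch; the input size, opset, and names here are placeholders rather than my exact call):

import torch

model.eval()
dummy = torch.randn(1, 3, 224, 224)  # assumed input resolution
torch.onnx.export(
    model, dummy, "resnet18.onnx",
    input_names=["input"],
    output_names=["prob"],
    opset_version=11,
)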
These are the normalization values I used during training:
normalize = transforms.Normalize(
    mean=[0.485, 0.456, 0.406],
    std=[0.229, 0.224, 0.225],
)
The model reaches about 80% accuracy in PyTorch, but when I run inference with nvinfer on the exact same images used for training, the results are very different.
I have used the following offsets and net-scale-factor:
# RGB, torchvision: offsets = 255 * [0.485, 0.456, 0.406]
offsets=123.675;116.28;103.53
maintain-aspect-ratio=1
#net-scale-factor=0.003921569
net-scale-factor=0.01735207357
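For reference, this is the arithmetic I used to derive those two values. The offsets map exactly from the torchvision means; for the scale factor I averaged the three per-channel std values, since net-scale-factor only accepts a single scalar:

# How the config values above were derived from the torchvision normalization.
# nvinfer preprocessing: y = net-scale-factor * (x - offsets), x in [0, 255]
# torchvision:           y = (x / 255 - mean) / std, per channel
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

offsets = [255 * m for m in mean]       # [123.675, 116.28, 103.53]
avg_std = sum(std) / len(std)           # 0.226
net_scale_factor = 1 / (255 * avg_std)  # ≈ 0.01735207357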
With that in mind, I wanted to ask:
- How do I set the correct offsets and net-scale-factor to match my training normalization values? (A quick check of the error introduced by the single-scale approximation is sketched after this list.)
- Are there any examples of how nvinfer does the asymmetric padding when maintain-aspect-ratio=1?
- Is there any official documentation on using a PyTorch model as an SGIE in DeepStream?
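This is the check mentioned in the first question: a minimal sketch (my own numpy comparison, not DeepStream code) of how far the single-scale approximation drifts from the exact per-channel normalization:

import numpy as np

# Compare exact per-channel normalization (torchvision) against the
# single-scale approximation from the nvinfer config above.
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
offsets = 255 * mean
net_scale = 1 / (255 * std.mean())

x = np.random.randint(0, 256, size=(1000, 3)).astype(np.float64)  # random RGB pixels
exact = (x / 255 - mean) / std       # what training used
approx = net_scale * (x - offsets)   # what nvinfer computes
print(np.abs(exact - approx).max())  # worst-case per-value difference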
Thanks.