Replicating PyTorch image pre-processing in TRT DeepStream

I recently posted this on the TensorRT forums and was referred here.

I currently think the problem is due to a difference between the image pre-processing used with the PyTorch model and the pre-processing applied when running on TRT.
How do I find the values to use in my spec file for net-scale-factor, offsets, or other variables to best replicate the image pre-processing used in the PyTorch program?
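
For reference, my understanding from the Gst-nvinfer documentation is that nvinfer pre-processes each pixel as y = net-scale-factor * (x - offset) on 0-255 input, whereas torchvision's ToTensor + Normalize computes y = (x / 255 - mean) / std. Below is a minimal sketch of the conversion I have in mind; the helper name is my own, and averaging the per-channel std values is an approximation, since the spec file accepts only a single scalar net-scale-factor:

def derive_nvinfer_params(mean, std):
    # Hypothetical helper: equate (x / 255 - mean) / std with
    # net-scale-factor * (x - offset) and solve for the spec-file values:
    #   offset = 255 * mean,  net-scale-factor = 1 / (255 * std)
    offsets = [255.0 * m for m in mean]         # per-channel offsets
    avg_std = sum(std) / len(std)               # spec file takes one scalar only
    net_scale_factor = 1.0 / (255.0 * avg_std)  # approximates per-channel std
    return net_scale_factor, offsets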

• Hardware Platform (Jetson / GPU) = Jetson
• DeepStream Version = 5.1
• JetPack Version (valid for Jetson only) = jetson-nano-jp451-sd-card-image
• TensorRT Version = 7.1.3-1
• Issue Type (questions, new requirements, bugs) = questions

Thanks

Hi,

Please see the topic below for the information:

Thanks.

Hi,

Unfortunately, when I try to use those values I get outputs that are much larger than expected and are not normalised as they should be.

I am trying to replicate this torchvision pipeline:

import torchvision.transforms

preprocess = torchvision.transforms.Compose([
    torchvision.transforms.ToTensor(),  # uint8 HWC [0, 255] -> float CHW [0.0, 1.0]
    torchvision.transforms.Normalize(mean=[0.43476477, 0.44504763, 0.43252817],
                                     std=[0.20490805, 0.19712372, 0.20312176]),
    torchvision.transforms.Resize((540, 960)),  # (height, width) must be one tuple argument
])
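
Working through the algebra with these values, here is a sketch of what I would put in the spec file. It assumes nvinfer applies y = net-scale-factor * (x - offsets) to 0-255 RGB input and, since the spec file takes a single scalar net-scale-factor, it uses the average of the three std values as an approximation:

mean = [0.43476477, 0.44504763, 0.43252817]
std = [0.20490805, 0.19712372, 0.20312176]

offsets = [255.0 * m for m in mean]                     # -> ~[110.87, 113.49, 110.29]
net_scale_factor = 1.0 / (255.0 * sum(std) / len(std))  # -> ~0.019441

print("net-scale-factor=%.6f" % net_scale_factor)
print("offsets=" + ";".join("%.4f" % o for o in offsets))

That would give net-scale-factor=0.019441 and offsets=110.8650;113.4871;110.2947, together with model-color-format=0 (which I believe selects RGB, matching torchvision's channel order). Please correct me if the formula nvinfer applies is different.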

Thanks

There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Hi,

Could you share the details about your preprocessing so we can check?
Thanks.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.