Config file settings for a custom PyTorch pre-processing transform

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) x86 RTX-3060
• DeepStream Version 6.1
• JetPack Version (valid for Jetson only) N/A
• TensorRT Version 8.2.5.1
• NVIDIA GPU Driver Version (valid for GPU only) 525.125.06
• Issue Type( questions, new requirements, bugs) questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I am working on a custom pre-processing transform implemented in a PyTorch inference module. It produces correct results for a given input tensor; however, after converting the PyTorch model to a native TensorRT engine, the pipeline does not produce equivalent results. There appears to be an issue with the way DeepStream handles the pre-processing. Below is the PyTorch pre-processing function:


import cv2
import torch
from PIL import Image
from torchvision import transforms

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

def data_transform(model):
    # Transforms needed for shufflenet: ToTensor scales pixels to [0, 1],
    # Normalize applies the ImageNet per-channel mean and std.
    if model != 'shufflenet':
        raise ValueError(f"no transform defined for '{model}'")
    return transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))
    ])

##########################################################################

# read/process image and apply transformation


def read_img(frame, np_transforms):
    # Resize and convert OpenCV's BGR to RGB before applying the transforms.
    # Interpolation is a keyword argument (cv2.resize's third positional
    # argument is dst, not the interpolation flag).
    small_frame = cv2.resize(frame, (224, 224), interpolation=cv2.INTER_AREA)
    small_frame = cv2.cvtColor(small_frame, cv2.COLOR_BGR2RGB)
    small_frame = Image.fromarray(small_frame)
    small_frame = np_transforms(small_frame).float()
    small_frame = small_frame.unsqueeze(0)  # add batch dimension
    return small_frame.to(device)
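
For context, a minimal usage sketch of the two functions above (net, the loaded shufflenet model, and the image path are assumptions added for illustration):

# Hypothetical PyTorch-side inference using the functions above
np_transforms = data_transform('shufflenet')
frame = cv2.imread('sample.jpg')  # illustrative path
with torch.no_grad():
    score = net(read_img(frame, np_transforms))  # single sigmoid-activated output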

Below are the config file settings for the SGIE classifier, but it is not producing the same results as the PyTorch inference.

[property]
gpu-id=0
net-scale-factor=0.007843137255
model-engine-file=../weights/tensorrt/shufflenet_fp16_rtx-3050.engine
#batch-size=1
# 0=NCHW, 1=NHWC, 2=CUSTOM
network-input-order=0
# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=1
process-mode=2
model-color-format=0
gie-unique-id=5
operate-on-gie-id=1
operate-on-class-ids=80
is-classifier=1
output-blob-names=output
classifier-async-mode=0
classifier-threshold=0.10
#input-dims=3;80;160
parse-classifier-func-name=NvDsInferParseCustomNVFire
custom-lib-path=../plugins/libs/libnvdsanalytics_fire.so
#scaling-filter=0
#scaling-compute-hw=0

[class-attrs-all]
# Define pre-processing parameters for 'shufflenet'
pre-processing-parameters=0,0,0,0;0,0,0,0

NOTE: The post-processing function implemented for DeepStream is working correctly; the network has a single output value with sigmoid activation, so the issue does not appear to be in the post-processing module. The model was tested at both FP16 and FP32 precision, and the results are almost the same in both cases.

How large is the difference?

It is relatively high; from what I can observe it is around 35-40%. net-scale-factor seems crucial here: I tried changing its value, and the result is strongly influenced by it.

The net-scale-factor should match the pre-processing used for training exactly. The gst-nvinfer internal pre-processing does not currently support per-channel standard deviation.
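
For reference, gst-nvinfer's internal pre-processing (per the DeepStream documentation) computes y = net-scale-factor * (x - mean), where x is the 0-255 pixel value, the per-channel mean comes from offsets or a mean file, and the scale factor is a single scalar. torchvision's ToTensor + Normalize instead computes y = (x / 255 - mean) / std with a per-channel std. A minimal sketch of the two, side by side:

import numpy as np

def nvinfer_preprocess(x, net_scale_factor, offsets):
    # gst-nvinfer: y = net-scale-factor * (x - mean); one scalar scale factor
    return net_scale_factor * (x - np.asarray(offsets))

def torchvision_preprocess(x, mean, std):
    # ToTensor + Normalize: y = (x / 255 - mean) / std; per-channel std
    return (x / 255.0 - np.asarray(mean)) / np.asarray(std)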

How did you calculate the net-scale-factor from the standard deviation in the PyTorch code?

Please read the PyTorch code and make sure you derive the correct parameters from it: torchvision.transforms — Torchvision master documentation (pytorch.org)

I really don't know how to compute the net-scale-factor from the Torch-based pre-processing. How can I derive the correct net-scale-factor from
transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))

There has been no update from you for a while, so we are assuming this is no longer an issue and closing this topic. If you need further support, please open a new one. Thanks.

This is the PyTorch API; the formula is provided by PyTorch. Please refer to torchvision.transforms — Torchvision master documentation (pytorch.org)
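
Equating the two formulas sketched above gives offsets = 255 * mean and net-scale-factor = 1 / (255 * std). Since gst-nvinfer accepts only one scalar scale factor while Normalize uses a per-channel std, one workaround (an approximation, not an official DeepStream recipe) is to average the three std values:

mean = (0.485, 0.456, 0.406)
std = (0.229, 0.224, 0.225)

offsets = [255.0 * m for m in mean]         # [123.675, 116.28, 103.53]
avg_std = sum(std) / len(std)               # ~0.226
net_scale_factor = 1.0 / (255.0 * avg_std)  # ~0.01735

In the [property] group this would correspond to roughly net-scale-factor=0.01735 and offsets=123.675;116.28;103.53. The per-channel std values differ from their average by under 1.5%, which is small compared with the 35-40% discrepancy reported above.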

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.