Custom model applied in DeepStream

Description

I integrated a custom model, developed in Python, into the DeepStream gst-nvinfer plugin, but the output of gst-nvinfer doesn't match what the Python code produces:

import cv2
import torch
from PIL import Image
from torchvision import transforms

cap = cv2.VideoCapture("output1.mp4")  # open the video with OpenCV
rval, frame = cap.read()
if not rval:
    raise RuntimeError("failed to read a frame")

img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # convert the frame from BGR to RGB
img_ = Image.fromarray(img)
img_transforms = transforms.Compose([
    transforms.Resize((288, 800)),
    transforms.ToTensor(),  # HWC uint8 -> CHW float in [0, 1]
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])
imgs = img_transforms(img_)
imgs = imgs.unsqueeze(0)  # add a batch dimension
imgs = imgs.cuda()
with torch.no_grad():
    out = module_net(imgs)

How

I think maybe the imgs input to "module_net(imgs)" isn't the same as what the gst pipeline feeds the model. Should I add a pre-processing step to gst-infer that matches img_transforms? If so, how?

Please provide the complete information applicable to your setup, and please also share more details about how the results from PyTorch and DeepStream differ. Did you set net-scale-factor in the DeepStream config file? A sketch of how nvinfer's preprocessing maps onto your img_transforms follows the list below.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and any other details needed to reproduce the issue.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application it is for, and a description of the function.)
• The pipeline being used
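
For reference: gst-nvinfer preprocesses each pixel as y = net-scale-factor * (x - offsets), where x is the raw 0-255 value. To approximate ToTensor (divide by 255) followed by Normalize(mean, std), you would set offsets = 255 * mean and net-scale-factor ≈ 1 / (255 * std). Note that net-scale-factor is a single scalar while your std is per-channel, so averaging the three std values is a common approximation. A minimal sketch of the relevant [property] entries, assuming the ImageNet values from your code (the rest of the config depends on your setup):

[property]
# model expects RGB input, matching the cv2.COLOR_BGR2RGB conversion above
model-color-format=0
# offsets = 255 * (0.485, 0.456, 0.406)
offsets=123.675;116.28;103.53
# net-scale-factor ~= 1 / (255 * mean(0.229, 0.224, 0.225)); a single scalar,
# so the per-channel std can only be matched approximately
net-scale-factor=0.01735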


Yeah, thanks! I misunderstood this parameter.
I solved this problem by setting the net-scale-factor!
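
For anyone who hits the same mismatch, here is a small Python sketch (my own illustration, not from the DeepStream sources) of why the parameter matters: it compares torchvision's ToTensor + Normalize with nvinfer's single-scalar scaling. The array names are made up for the example.

import numpy as np

# torchvision: ToTensor divides by 255, Normalize subtracts mean and divides by std
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

# equivalent DeepStream-style parameters, expressed in 0-255 pixel space
offsets = 255.0 * mean                         # [123.675, 116.28, 103.53]
net_scale_factor = 1.0 / (255.0 * std.mean())  # one scalar, ~0.01735

# a fake RGB frame standing in for a decoded video frame
x = np.random.randint(0, 256, size=(288, 800, 3)).astype(np.float32)

torch_style = (x / 255.0 - mean) / std            # per-channel normalization
nvinfer_style = net_scale_factor * (x - offsets)  # single-scalar approximation

# close but not identical, because the three stds are averaged into one scalar
print(np.abs(torch_style - nvinfer_style).max())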

Is this still an issue that needs support? Thanks

Thanks! This issue can be closed!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.