Custom Model applied on DeepStream

Description

I am integrating a custom model developed in Python (PyTorch) into the DeepStream gst-nvinfer element, but the output from gst-nvinfer is not the same as the output from the Python code:
import cv2
import torch
from PIL import Image
from torchvision import transforms

cap = cv2.VideoCapture("output1.mp4")  # open the video file with OpenCV
while cap.isOpened():
    rval, frame = cap.read()
    if not rval:
        break
    img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # convert frame from BGR to RGB
    img_ = Image.fromarray(img)
    img_transforms = transforms.Compose([
        transforms.Resize((288, 800)),
        transforms.ToTensor(),
        transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
    ])
    imgs = img_transforms(img_)
    imgs = imgs.unsqueeze(0)  # add batch dimension
    imgs = imgs.cuda()
    with torch.no_grad():
        out = module_net(imgs)


I think the imgs tensor passed into module_net(imgs) may not be the same as what the GStreamer pipeline feeds the model, so should I add a pre-processor for the input image in gst-nvinfer that reproduces img_transforms? If yes, how?
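A minimal sketch of how the two pre-processing paths can be compared, assuming gst-nvinfer's usual per-channel formula y = net-scale-factor * (x - offsets) (configured through the net-scale-factor, offsets, and model-color-format keys in the nvinfer config file): nvinfer uses a single scale factor, so the per-channel std in Normalize can only be approximated, or folded into the model itself. The scale and offset values below are assumptions, not taken from a working config:

import numpy as np
import torch
from PIL import Image
from torchvision import transforms

# Assumed values approximating Normalize(mean, std) above:
# net-scale-factor uses an "average" std of ~0.226; offsets are mean * 255 (RGB order).
NET_SCALE_FACTOR = 1.0 / (0.226 * 255.0)
OFFSETS = np.array([0.485, 0.456, 0.406], dtype=np.float32) * 255.0

frame_rgb = np.random.randint(0, 256, (288, 800, 3), dtype=np.uint8)  # stand-in RGB frame

# Pre-processing as in the PyTorch code above (Resize omitted: frame is already 288x800)
torch_pre = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])(Image.fromarray(frame_rgb))

# nvinfer-style pre-processing: y = net-scale-factor * (x - offsets), then HWC -> CHW
nvinfer_pre = (frame_rgb.astype(np.float32) - OFFSETS) * NET_SCALE_FACTOR
nvinfer_pre = torch.from_numpy(nvinfer_pre).permute(2, 0, 1)

print((torch_pre - nvinfer_pre).abs().max())  # residual error of the single-scale approximation

If that residual is too large for the model, one common workaround is to export the model with the normalization baked into its first layer and let nvinfer only scale the input.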

Hi,

This looks like a DeepStream-related issue. We will move this post to the DeepStream forum.

Thanks!

Thanks! I have created a new topic in the DeepStream forum.
https://forums.developer.nvidia.com/t/custom-model-applied-on-deepstream/238266

