Description
I introduced a custom model, developed in Python, into the DeepStream Gst-nvinfer plugin, but the output of Gst-nvinfer isn't the same as from my Python code:
```python
import cv2
import torch
from PIL import Image
from torchvision import transforms

img_transforms = transforms.Compose([
    transforms.Resize((288, 800)),
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])

cap = cv2.VideoCapture("output1.mp4")  # open the video file
while True:
    rval, frame = cap.read()
    if not rval:
        break
    img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV frame is BGR; convert to RGB
    img_ = Image.fromarray(img)
    imgs = img_transforms(img_)
    imgs = imgs.unsqueeze(0)  # add batch dimension
    imgs = imgs.cuda()
    with torch.no_grad():
        out = module_net(imgs)  # module_net: the loaded PyTorch model (defined elsewhere)
```
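For reference, the torchvision pipeline above is mathematically the same as the affine transform y = (x - 255·mean) / (255·std) applied to raw 0-255 pixel values, which is the form Gst-nvinfer's preprocessing uses (scale times pixel minus offset). A quick numerical check of that equivalence on a random frame (this is a sketch; the array here stands in for a decoded video frame):

```python
import numpy as np

mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)

# a stand-in for one decoded RGB frame, raw 0-255 pixel values
rgb = np.random.randint(0, 256, (288, 800, 3)).astype(np.float32)

# torchvision path: ToTensor scales to [0, 1], Normalize does (x - mean) / std
torch_style = (rgb / 255.0 - mean) / std

# nvinfer-style path: scale * (x - offsets) on the raw 0-255 pixels
offsets = 255.0 * mean       # per-channel offsets in the 0-255 range
scale = 1.0 / (255.0 * std)  # per-channel scale
nvinfer_style = scale * (rgb - offsets)

# maximum difference should be near zero (float rounding only)
print(np.abs(torch_style - nvinfer_style).max())
```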
I suspect the input that the GStreamer pipeline feeds to the model is not preprocessed the same way as the imgs passed to module_net(imgs). Should I add a pre-processing step in Gst-nvinfer that matches img_transforms? If so, how?
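One common approach is to express the Normalize step through Gst-nvinfer's net-scale-factor and offsets keys, since nvinfer preprocesses as y = net-scale-factor * (x - offsets). Note that net-scale-factor is a single scalar, so the three different std values (0.229, 0.224, 0.225) can only be matched approximately, e.g. with their average of about 0.226. A sketch of the relevant lines in an nvinfer config file (the exact values below are derived from the transforms above, not taken from any existing config):

```ini
[property]
# net-scale-factor ~= 1 / (255 * 0.226); 0.226 is the average of the
# three per-channel std values, because only one scalar is accepted
net-scale-factor=0.01735
# offsets = 255 * (0.485, 0.456, 0.406), the per-channel means in 0-255 range
offsets=123.675;116.28;103.53
# 0 = RGB, matching the cv2.COLOR_BGR2RGB conversion in the Python code
model-color-format=0
```

If the approximation of the per-channel std is not accurate enough for the model, a custom preprocessing step (e.g. via the Gst-nvdspreprocess plugin) would be needed to apply true per-channel scaling.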