Hello,
I am new to Jetson and am trying to use semantic segmentation to filter unnecessary regions (in my case, walls and floors) out of a webcam video stream. The data-processing part of my script, using OpenCV and segNet, is shown below:
import cv2
import numpy as np
import jetson.inference
import jetson.utils

net = jetson.inference.segNet("fcn-resnet18-sun-512x400")

# camera (cv2.VideoCapture) and out (cv2.VideoWriter) are created earlier
while True:
    _, image = camera.read()
    # OpenCV delivers BGR uint8; convert to RGBA float32 for CUDA
    img = cv2.cvtColor(image, cv2.COLOR_BGR2RGBA).astype(np.float32)
    img_input = jetson.utils.cudaFromNumpy(img)
    net.Process(img_input)
    net.Mask(img_input)  # overwrites the image with the class-color mask
    jetson.utils.cudaDeviceSynchronize()
    img_cv2 = jetson.utils.cudaToNumpy(img_input)
    img_cv2 = cv2.cvtColor(img_cv2, cv2.COLOR_RGBA2BGR).astype(np.float32)
    out.write(np.uint8(img_cv2))
    cv2.imshow('OpenCV Output', img_cv2 / 255)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
I want to retrieve the parts of the image that are not classed as walls or floors (classes 2 and 3?) so I can apply another algorithm to what’s left.
Clearly the per-pixel class results must be stored somewhere, because the cv2.imshow of img_cv2 displays the mask with a different color per class; I just can't work out how to access them.
Ideally I would have something like detectNet's Detect() method, which returns a list of detections, but I can't find anything similar for segNet in the documentation.
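To make the goal concrete, here is an untested sketch of the filtering step I have in mind, assuming I can somehow get a per-pixel grid of class IDs out of segNet (the gray8 buffer usage, GetGridSize(), and the wall/floor IDs 2 and 3 are all guesses on my part):

```python
import numpy as np

# Presumed way to get the class-ID grid (untested, may be wrong):
#   grid_w, grid_h = net.GetGridSize()
#   class_mask = jetson.utils.cudaAllocMapped(width=grid_w, height=grid_h,
#                                             format="gray8")
#   net.Mask(class_mask, grid_w, grid_h)
#   jetson.utils.cudaDeviceSynchronize()
#   class_ids = jetson.utils.cudaToNumpy(class_mask).squeeze()

def filter_classes(frame, class_ids, drop=(2, 3)):
    """Black out pixels of `frame` whose segmentation class is in `drop`.

    frame:     H x W x 3 uint8 image (full camera resolution)
    class_ids: gh x gw uint8 grid of per-cell class IDs (segNet grid size)
    """
    keep = ~np.isin(class_ids, drop)       # True where the pixel survives
    gh, gw = keep.shape
    H, W = frame.shape[:2]
    # Nearest-neighbour upscale of the grid mask to the frame resolution
    ys = np.arange(H) * gh // H
    xs = np.arange(W) * gw // W
    keep_full = keep[ys][:, xs]
    out = frame.copy()
    out[~keep_full] = 0                    # zero out wall/floor pixels
    return out
```

The surviving (non-zero) pixels would then be passed to the next algorithm.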
Thanks!