Isolating data from images using SegNet


I am new to Jetson and trying to use semantic segmentation to filter out unnecessary data (in my case, walls and floors) from a webcam video stream. The data processing part of my script is shown below, using OpenCV and SegNet:

    import cv2, numpy as np
    import jetson.inference, jetson.utils

    net = jetson.inference.segNet("fcn-resnet18-sun-512x400")
    cap = cv2.VideoCapture(0)
    while True:
        _, image = cap.read()  # grab a frame from the webcam
        img = cv2.cvtColor(image, cv2.COLOR_BGR2RGBA).astype(np.float32)
        img_input = jetson.utils.cudaFromNumpy(img)
        net.Process(img_input)   # run segmentation
        net.Overlay(img_input)   # write the class-color overlay into the image
        img_cv2 = jetson.utils.cudaToNumpy(img_input)
        img_cv2 = cv2.cvtColor(img_cv2, cv2.COLOR_RGBA2BGR).astype(np.float32)
        cv2.imshow('OpenCV Output', img_cv2 / 255)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

I want to retrieve the parts of the image that are not classed as walls or floors (classes 2 and 3?) so I can apply another algorithm to what’s left.
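For context, once the per-pixel class IDs are available as a numpy array, keeping everything that is not a wall or floor is a simple boolean mask. A minimal numpy-only sketch, where the contents of `class_map` and the wall/floor IDs 2 and 3 are illustrative assumptions:

```python
import numpy as np

# Assumed class IDs for wall and floor (the "classes 2 and 3?" from above)
WALL, FLOOR = 2, 3

# Stand-in for the per-pixel class-ID map that segmentation would produce
class_map = np.array([[0, 2, 2],
                      [3, 0, 2],
                      [3, 3, 0]], dtype=np.uint8)

# Stand-in for a 3x3 BGR frame (all-white pixels)
image = np.full((3, 3, 3), 255, dtype=np.uint8)

keep = ~np.isin(class_map, (WALL, FLOOR))  # True where NOT wall/floor
filtered = image * keep[..., None]         # zero out wall/floor pixels
```

In practice `class_map` would come from the network rather than being hard-coded; `keep[..., None]` broadcasts the 2-D mask across the color channels so the masked frame can be fed to the next algorithm.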

Clearly the class assignments must be stored somewhere, because the cv2.imshow of img_cv2 displays the mask with the different class colors. I'm just not sure how to access them.

Ideally I would have something similar to detectNet's Detect() method, which returns a list of Detection results. I can't seem to find anything comparable for segNet in the documentation.



Sorry for the late update.

Have you fixed this issue already?
If not, please check whether the following discussion meets your requirement:



Thank you! Using the code from that thread, I can get a (13, 16, 1) numpy array of the class IDs. The one change I had to make was setting the format in jetson.utils.cudaAllocMapped() to 'gray8'; when I used the frame's format, I got an array of the class colors instead. I think that will be all I need for this process.
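One wrinkle with this approach: the class-ID grid is much coarser than the frame (13x16 here), so it has to be upscaled to frame resolution before it can mask individual pixels. A minimal nearest-neighbour sketch using np.repeat; the 2x2 grid and the scale factors are made-up values for illustration:

```python
import numpy as np

# Stand-in for the coarse class-ID grid (really 13x16 in the thread above)
grid = np.array([[0, 2],
                 [3, 0]], dtype=np.uint8)

# Illustrative integer scale factors: frame_h // grid_h, frame_w // grid_w
scale_y, scale_x = 2, 3

# Nearest-neighbour upscale: each grid cell becomes a scale_y x scale_x block
mask = grid.repeat(scale_y, axis=0).repeat(scale_x, axis=1)
```

When the scale factors aren't integers, `cv2.resize(grid, (w, h), interpolation=cv2.INTER_NEAREST)` achieves the same effect without distorting the class IDs the way a default bilinear resize would.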

Thank you!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.