Semantic Segmentation: Outdoor Navigation with SegNet and Jetson Nano


My question may already have been answered, but unfortunately I didn’t find it on the forum.
I’m using “Running the Live Camera Segmentation Demo” on Jetson Nano from here (dusty-nv):

The program works great.
I would like to take it to the next level and use it for simple navigation.
As a logical step, I would like to extract one particular horizontal line (for example: y=50, x=0…100) from the mask image and use it for navigation purposes.

I’m using Python with the “DeepScene” dataset:
./ --network=fcn-resnet18-deepscene /dev/video0

I assume I have to modify this code and extract the pixel information after the mask is generated in row# 83-85:

	if buffers.mask:
		net.Mask(buffers.mask, filter_mode=opt.filter_mode)

The segmented picture stored here: buffers.mask

How can I access the pixels and colors? Especially the brown color for the ‘trail’:
DeepScene Classes

or just get the class IDs (for example number ‘0’ for the ‘trail’) in a particular horizontal line ?

(Just for info, the planned navigation process: if a particular line has more brown pixels on the left side than on the right, I can instruct the robot to turn left.)
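That left/right comparison can be sketched in a few lines of NumPy. This is only a minimal illustration, assuming the ‘trail’ class ID is 0 and that the class mask is already available as a 2-D NumPy array (the array here is a synthetic stand-in, and `steering_hint` is a hypothetical helper name):

```python
import numpy as np

TRAIL_CLASS_ID = 0  # assumed class ID for 'trail' in DeepScene

def steering_hint(class_mask_np, row):
    """Compare trail pixels left vs. right of center on one horizontal line."""
    line = class_mask_np[row]                  # one row of class IDs
    mid = len(line) // 2
    left = np.count_nonzero(line[:mid] == TRAIL_CLASS_ID)
    right = np.count_nonzero(line[mid:] == TRAIL_CLASS_ID)
    if left > right:
        return "left"
    elif right > left:
        return "right"
    return "straight"

# toy 4x8 mask: trail (class 0) on the left, another class (1) elsewhere
mask = np.ones((4, 8), dtype=np.uint8)
mask[:, :3] = TRAIL_CLASS_ID
print(steering_hint(mask, row=2))  # -> "left"
```

In practice the array would come from `jetson.utils.cudaToNumpy()` on the class mask, as described in the replies below in this thread.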

I hope I was clear. If not, please get in touch.
Thank You very much for Your help in advance.
Have a nice day ahead :-)

With Regards,


You can find the segmentationBuffers source code in the file below.

As shown there, if use_stats is true, cudaToNumpy will be called.
So you will have the mask stored as a NumPy array in buffers.class_mask_np.


The code that Aasta links to uses the class ID mask instead of the color mask. It will be easier for you to interpret the class IDs than the colors.

To get the class ID mask, allocate a single-channel image in gray8 format and then call segNet.Mask() with it:

# see for example
grid_width, grid_height = net.GetGridSize()
class_mask = jetson.utils.cudaAllocMapped(width=grid_width, height=grid_height, format="gray8")
net.Mask(class_mask, grid_width, grid_height)
class_mask_np = jetson.utils.cudaToNumpy(class_mask)
# you can now access the class IDs from class_mask_np
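Once `class_mask_np` exists, extracting one horizontal line of class IDs (the original goal of this thread) is plain NumPy indexing. A small sketch, using a synthetic array as a stand-in for the real `class_mask_np` (the row index and class layout are just illustrations):

```python
import numpy as np

# stand-in for class_mask_np; in practice it comes from jetson.utils.cudaToNumpy()
grid_height, grid_width = 10, 18
class_mask_np = np.zeros((grid_height, grid_width), dtype=np.uint8)  # class 0 ('trail')
class_mask_np[:, 9:] = 1   # pretend another class fills the right half

row = class_mask_np[5]     # all class IDs on the horizontal line y=5
trail_pixels = np.count_nonzero(row == 0)
print(row.shape)           # (18,)
print(trail_pixels)        # -> 9
```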

Hi Dusty,

Your script worked! Exactly as I wished! Thank You very much for Your help (Aasta as well)!

Just info for others:
I added the above code to this file, and in the next row, just:

It gave back the class IDs in a 18x10 matrix :-)

Is it possible, for example, to double the resolution of the mask/matrix (to have more accurate navigation)?
If yes, I will open another topic.

Thank You again !

To get a truly larger class ID mask/matrix, you would need to use a larger model (for example, fcn-resnet18-deepscene-864x480 will have a larger mask than fcn-resnet18-deepscene-576x320). The dimensions of the output class mask scale with the input dimensions. Performance scales as well, so larger models take longer to process.

I believe that the 864x480 size was the largest that I could train the DeepScene model, because of the resolution of the DeepScene dataset itself.

Alternatively, if you pass in a matrix to segNet.Mask() that is larger than the raw output, it will use interpolation to upsample the data. In the case of the colorized mask/overlay, this does make the image look better. However in the case of the class ID mask, upsampling the raw data isn’t actually making more useful data for you to navigate from, and only increases your processing demands on that class mask data. So for navigation purposes, I recommend just sticking with the raw size of the class mask.
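The point about upsampling adding no new information can be shown with a tiny NumPy demonstration. This uses nearest-neighbor upsampling purely as an illustration (the interpolation jetson-inference applies may differ):

```python
import numpy as np

# a tiny 2x3 class ID mask
mask = np.array([[0, 1, 1],
                 [0, 0, 1]], dtype=np.uint8)

# nearest-neighbor upsample by 2x: each pixel just becomes a 2x2 block
upsampled = np.kron(mask, np.ones((2, 2), dtype=np.uint8))
print(upsampled.shape)  # (4, 6)

# every value is a copy of an original pixel: no new class information,
# just 4x the data to process on every frame
print(np.array_equal(upsampled[::2, ::2], mask))  # True
```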

Dear Dusty,

Thank You very much for Your prompt answer and guidance.
I will try it out and open a new topic accordingly.

With Regards,