FCN-ResNet18-MHP-512x320 pre-trained segmentation model: the effect is not obvious when testing, the colors only look a little darker
Hi,
Do you use a similar input source to the one shared in this GitHub page:
<img src="https://github.com/dusty-nv/jetson-inference/raw/master/docs/images/deep-vision-header.jpg" width="100%">
<p align="right"><sup><a href="detectnet-example-2.md">Back</a> | <a href="segnet-camera-2.md">Next</a> | </sup><a href="../README.md#hello-ai-world"><sup>Contents</sup></a>
<br/>
<sup>Semantic Segmentation</sup></s></p>
# Semantic Segmentation with SegNet
The next deep learning capability we'll cover in this tutorial is **semantic segmentation**. Semantic segmentation is based on image recognition, except the classifications occur at the pixel level as opposed to the entire image. This is accomplished by *convolutionalizing* a pre-trained image recognition backbone, which transforms the model into a [Fully Convolutional Network (FCN)](https://arxiv.org/abs/1605.06211) capable of per-pixel labeling. Especially useful for environmental perception, segmentation yields dense per-pixel classifications of many different potential objects per scene, including scene foregrounds and backgrounds.
<img src="https://github.com/dusty-nv/jetson-inference/raw/pytorch/docs/images/segmentation.jpg">
[`segNet`](../c/segNet.h) accepts as input the 2D image, and outputs a second image with the per-pixel classification mask overlay. Each pixel of the mask corresponds to the class of object that was classified. [`segNet`](../c/segNet.h) is available to use from [Python](https://rawgit.com/dusty-nv/jetson-inference/pytorch/docs/html/python/jetson.inference.html#segNet) and [C++](../c/segNet.h).
As examples of using the `segNet` class, we provide sample programs for C++ and Python:
- [`segnet.cpp`](../examples/segnet/segnet.cpp) (C++)
- [`segnet.py`](../python/examples/segnet.py) (Python)
These samples are able to segment images, videos, and camera feeds. For more info about the various types of input/output streams supported, see the [Camera Streaming and Multimedia](aux-streaming.md) page.
See [below](#pretrained-segmentation-models-available) for various pre-trained segmentation models available that use the FCN-ResNet18 network with realtime performance on Jetson. Models are provided for a variety of environments and subject matter, including urban cities, off-road trails, and indoor office spaces and homes.
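For reference, here is a minimal sketch of driving the `segNet` Python API directly (the network name and image path are placeholders, and the exact signatures should be checked against the linked Python reference and `segnet.py`):

```python
import jetson.inference
import jetson.utils

# load a pre-trained FCN-ResNet18 segmentation model
# ("fcn-resnet18-voc" is just an example network name)
net = jetson.inference.segNet("fcn-resnet18-voc")

# load the input image and allocate an output image of the same size
img_input = jetson.utils.loadImage("images/city_0.jpg")
img_overlay = jetson.utils.cudaAllocMapped(width=img_input.width,
                                           height=img_input.height,
                                           format=img_input.format)

# run inference, then render the color-coded class overlay
net.Process(img_input)
net.Overlay(img_overlay, filter_mode="linear")

jetson.utils.saveImage("output_overlay.jpg", img_overlay)
```

`net.Mask()` can be used in place of `net.Overlay()` if you want the raw per-pixel class mask instead of the blended overlay.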
You may need to fine-tune the model if your use case is somewhat different.
Thanks.
You might want to increase the alpha value by running the segnet program with `--alpha=200` or some higher value. The default value is `--alpha=120`. Increasing it will make the overlay more noticeable.
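If you are calling the Python API directly rather than the sample's command line, the equivalent knob is `SetOverlayAlpha()` (a sketch, using the example value from above):

```python
# equivalent to passing --alpha=200 on the segnet command line:
# sets the overlay transparency (0-255) used by net.Overlay()
net.SetOverlayAlpha(200.0)
```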
Thank you for your reply. I used the model provided by the official website.
Original input image for semantic segmentation:
Semantic segmentation output image:
Is this result correct?
You probably want to test the MHP model first on the `humans_*.jpg` images (e.g. `images/humans_0.jpg`). The MHP models are fairly low-res, so they may have trouble picking up the smaller people in the city image that you tried. You could try that image with the Cityscapes model, though (a rough example of loading both models is sketched below).
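For example, switching between the two models from Python looks roughly like this (the model names follow the pre-trained model table in the README, and the image path is a placeholder):

```python
import jetson.inference
import jetson.utils

# MHP model, intended for close-up images of people
net_mhp = jetson.inference.segNet("fcn-resnet18-mhp-512x320")

# Cityscapes model, intended for urban street scenes
net_city = jetson.inference.segNet("fcn-resnet18-cityscapes-1024x512")

img = jetson.utils.loadImage("images/humans_0.jpg")  # placeholder path
net_mhp.Process(img)
```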