Testing the FCN-ResNet18-MHP-512x320 pre-trained segmentation model: the effect is barely visible, the output is only slightly darker in color
Hi,
Do you use a similar input source to the one shared in this GitHub page:
# Semantic Segmentation with SegNet
The next deep learning capability we'll cover in this tutorial is **semantic segmentation**. Semantic segmentation is based on image recognition, except the classifications occur at the pixel level as opposed to the entire image. This is accomplished by *convolutionalizing* a pre-trained image recognition backbone, which transforms the model into a [Fully Convolutional Network (FCN)](https://arxiv.org/abs/1605.06211) capable of per-pixel labelling. Especially useful for environmental perception, segmentation yields dense per-pixel classifications of many different potential objects per scene, including scene foregrounds and backgrounds.
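To make the per-pixel idea concrete, here is a minimal, framework-free sketch (plain Python with made-up class names and scores, not the actual FCN code) of the final step such a network performs: taking one score map per class and picking the highest-scoring class at every pixel.

```python
# Toy per-pixel classification: an FCN head outputs one score map per class;
# the predicted mask is the argmax over classes at every pixel.
# Class names and score values are invented for illustration.

CLASSES = ["background", "road", "person"]

# scores[c][y][x] = score for class c at pixel (y, x)  (2x2 image, 3 classes)
scores = [
    [[0.9, 0.8], [0.1, 0.2]],   # background
    [[0.05, 0.1], [0.7, 0.1]],  # road
    [[0.05, 0.1], [0.2, 0.7]],  # person
]

def argmax_mask(scores):
    """Return a 2D mask of class indices: per-pixel argmax over class scores."""
    h, w = len(scores[0]), len(scores[0][0])
    return [
        [max(range(len(scores)), key=lambda c: scores[c][y][x]) for x in range(w)]
        for y in range(h)
    ]

mask = argmax_mask(scores)
print(mask)                                       # class index per pixel
print([[CLASSES[c] for c in row] for row in mask])
```

The real network does the same argmax, just over a full-resolution grid of class probabilities rather than a 2x2 toy.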
<img src="https://github.com/dusty-nv/jetson-inference/raw/pytorch/docs/images/segmentation.jpg" width="900">
`segNet` accepts as input the 2D image, and outputs a second image with the per-pixel classification mask overlay. Each pixel of the mask corresponds to the class of object that was classified. `segNet` is available to use from [Python](https://rawgit.com/dusty-nv/jetson-inference/pytorch/docs/html/python/jetson.inference.html#segNet) and [C++](../c/segNet.h).
As examples of using `segNet` we provide versions of a command-line interface for C++ and Python:
- [`segnet-console.cpp`](../examples/segnet-console/segnet-console.cpp) (C++)
- [`segnet-console.py`](../python/examples/segnet-console.py) (Python)
Later in the tutorial, we'll also cover segmentation on live camera streams from C++ and Python:
- [`segnet-camera.cpp`](../examples/segnet-camera/segnet-camera.cpp) (C++)
You may need to fine-tune the model if your use case differs in some way.
Thanks.
You might want to increase the alpha value by running the segnet program with `--alpha=200` or some higher value. The default is `--alpha=120`. Increasing it will make the segmentation overlay more noticeable.
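For reference, the overlay is alpha-blended on top of the input image. A rough sketch of that math in plain Python (the exact blending in segNet's CUDA code may differ) shows why raising the alpha makes the class colors stand out more:

```python
# Alpha compositing of a class color over the original pixel (values 0-255).
# Higher alpha weights the class color more heavily, so the overlay is
# more visible; at alpha=0 the original image shows through unchanged.

def blend(overlay, base, alpha):
    """Blend one color channel; alpha is in [0, 255], as in --alpha=120."""
    a = alpha / 255.0
    return round(a * overlay + (1.0 - a) * base)

class_green = 255   # green channel of a hypothetical class color
pixel_green = 40    # green channel of the underlying image pixel

print(blend(class_green, pixel_green, 120))  # default --alpha=120: subtle tint
print(blend(class_green, pixel_green, 200))  # --alpha=200: much stronger tint
```

With the default alpha the blended channel lands roughly halfway between the image and the class color, which is why the result can read as "only a little darker" rather than a clear mask.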
Thank you for your reply. I used the model provided by the official website.
Original image of semantic segmentation input:
Semantic segmentation output picture:
Is this result correct?
You probably want to test the MHP model first on the `humans_*.jpg` images (e.g. `images/humans_0.jpg`). The MHP models are fairly low-res, so they may have trouble picking up the smaller people in the city image that you tried. You could try that image with the Cityscapes model, though.
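As a rough illustration of why low-resolution models miss small objects (plain Python with arbitrary sizes, not the actual MHP network): when the spatial grid is coarse, a small object's contribution to its grid cell averages out below the decision threshold.

```python
# A 1-pixel "person" in an 8x8 mask disappears when the mask is pooled down
# to 2x2 (each output cell averages a 4x4 block and is thresholded), while a
# large 4x4 "person" survives. Sizes here are made up for illustration.

def downsample(mask, block):
    """Average-pool a binary mask by `block` and threshold at 0.5."""
    h, w = len(mask), len(mask[0])
    out = []
    for y in range(0, h, block):
        row = []
        for x in range(0, w, block):
            total = sum(mask[y + dy][x + dx]
                        for dy in range(block) for dx in range(block))
            row.append(1 if total / (block * block) >= 0.5 else 0)
        out.append(row)
    return out

small = [[0] * 8 for _ in range(8)]
small[3][3] = 1                      # one-pixel person

large = [[0] * 8 for _ in range(8)]
for y in range(4):
    for x in range(4):
        large[y][x] = 1              # 4x4 person

print(downsample(small, 4))          # the small person vanishes
print(downsample(large, 4))          # the large person is still detected
```

The same effect applies at 512x320: people who occupy only a few pixels of the network's input grid can fall below what the model resolves, which is why the human-scale `humans_*.jpg` images are a better first test.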