Semantic Segmentation - Border

Hi friends,

I am currently working with semantic segmentation to detect the quality of apples on trees, and I have a question: how can I improve the segmentation edges?
I checked the jetson-inference docs folder and found this image, which I found interesting and would like to match or improve upon.

However, in the general example we see images like this, where the edge of the segmented region is not exact.

I currently have results like this:



Apples that are painted red are high quality, blue ones are medium quality, and green apples are poor quality.

What can I do to improve the edge outlines of the apples?

Hi,

That’s because the resolution of the output mask is not high enough.
You can find the resolution information of each model below:
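As a rough illustration of why the mask resolution matters (the model names and grid sizes below are taken from that table; the 1920x1080 frame size is just an assumed camera resolution), each cell of a low-resolution mask has to cover many frame pixels once it is stretched to full size:

```python
# Hypothetical example: how many camera-frame pixels each mask cell
# covers when the network's output mask is stretched to 1920x1080.
frame_w, frame_h = 1920, 1080  # assumed camera resolution

models = {
    "fcn-resnet18-voc-320x320": (320, 320),
    "fcn-resnet18-cityscapes-2048x1024": (2048, 1024),
}

for name, (mask_w, mask_h) in models.items():
    px_w = frame_w / mask_w
    px_h = frame_h / mask_h
    print(f"{name}: each mask cell spans ~{px_w:.1f} x {px_h:.1f} frame pixels")
```

With the 320x320 mask, every cell is smeared over roughly a 6x3 pixel block at 1080p, which is why object borders look soft.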

Thanks.

Hi,

In the case of the apples, I will retrain on a new dataset rather than using the existing ones.
Should I label higher-resolution images and then run the training?
How many images is it recommended to label for each type of apple?
What would you recommend to improve the output mask?

Hi,

The output resolution is related to the network architecture.
Did you retrain it with one of the models shared in the link above?
If yes, please share which model you used.

Thanks.

Hi,

I am using the tutorial:

and using fcn_resnet18…

Hello there,

Could you please tell me whether I should make any modifications to improve the edges?

Hi,

The tutorial uses fcn-resnet18-voc-320x320, which means the output mask is 320x320.
So you will get a blurred mask when upscaling it to a standard image size, e.g. 1920x1080.
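A minimal numpy sketch of that effect (a toy 4x4 mask standing in for the real 320x320 output; nothing here uses the jetson-inference API): nearest-neighbour upscaling keeps the borders hard but blocky, while linear upscaling smears the class boundary into intermediate values.

```python
import numpy as np

# Toy 4x4 class mask (1 = apple, 0 = background), standing in for the
# network's low-resolution output mask.
mask = np.zeros((4, 4), dtype=np.float32)
mask[1:3, 1:3] = 1.0

# Nearest-neighbour upscaling (factor 4): borders stay hard but blocky.
nearest = np.kron(mask, np.ones((4, 4), dtype=np.float32))

def upscale_linear(m, factor):
    """Crude bilinear-style upscale via per-axis linear interpolation."""
    old = np.arange(m.shape[0], dtype=np.float32)
    new = np.linspace(0, m.shape[0] - 1, m.shape[0] * factor)
    rows = np.array([np.interp(new, old, r) for r in m])         # stretch rows
    return np.array([np.interp(new, old, c) for c in rows.T]).T  # stretch cols

linear = upscale_linear(mask, 4)

# Nearest keeps only the original class values; linear introduces
# in-between values at the border -- the blur seen at full frame size.
print(np.unique(nearest))      # only the original 0/1 values
print(np.unique(linear).size)  # many intermediate values
```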

In the link shared above, there is a 2048x1024 resolution model called fcn-resnet18-cityscapes-2048x1024.
Would you mind retraining the model for your use case?


Hi,

I attach the options that `python train.py -h` gives me.
As you can see, that model does not appear among them.

Which one should I use?
