Depending on your dataset, you may also be able to write a simple OpenCV tool to perform rough labeling by color or texture, like the one included with the drone dataset from the tutorial.
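A minimal sketch of that idea follows. It uses plain NumPy thresholding (the same operation OpenCV's `cv2.inRange` performs); the class IDs and RGB ranges are made-up placeholders you would tune for your own dataset:

```python
import numpy as np

def rough_label_by_color(image, class_ranges):
    """Assign a class ID to each pixel whose RGB value falls in a range.

    image: HxWx3 uint8 array.
    class_ranges: list of (class_id, lower_rgb, upper_rgb) tuples.
    Pixels matching no range keep class 0 (background).
    """
    labels = np.zeros(image.shape[:2], dtype=np.uint8)
    for class_id, lower, upper in class_ranges:
        # Equivalent to cv2.inRange(image, lower, upper)
        mask = np.all((image >= lower) & (image <= upper), axis=-1)
        labels[mask] = class_id
    return labels

# Toy example: label blue-ish "sky" pixels as class 1
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, :] = [50, 100, 220]  # blue-ish top row
labels = rough_label_by_color(img, [(1, (0, 0, 150), (120, 160, 255))])
```

Later classes in the list overwrite earlier ones where ranges overlap, so order them from least to most specific.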
Hi there,
now that TensorRT support for deconvolution layers is out and working decently, I wanted to test it with the segNet code.
So I used the original deploy.prototxt and changed SEGNET_DEFAULT_OUTPUT to “upscore_21classes”. After the initial test I got something like this:
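As a side note, if your jetson-inference build supports it, you may be able to pass the blob names on the command line instead of editing SEGNET_DEFAULT_OUTPUT and recompiling. The paths and names below are placeholders for your own model:

```shell
# Hypothetical invocation: load a custom Caffe segmentation model,
# overriding the input/output blob names at runtime
segnet-console input.jpg output.jpg \
  --prototxt=my_model/deploy.prototxt \
  --model=my_model/snapshot.caffemodel \
  --labels=my_model/labels.txt \
  --colors=my_model/colors.txt \
  --input_blob=data \
  --output_blob=upscore_21classes
```

Check the segnet-console usage output on your version to confirm which flags are available.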
I expected to see the per-class output, which would presumably resemble the output in DIGITS, but it didn’t; it was just the same pixelated output. You can see it more clearly in these two sections:
The first is the output after bilinear interpolation, and the second is the output after the deconvolution layer:
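For context on why the two upsampling paths look different, here is a small NumPy sketch comparing nearest-neighbor upsampling (which produces the blocky, pixelated look) with bilinear interpolation of the same coarse score grid (the grid values and scale factor are made up for illustration):

```python
import numpy as np

def upsample_nearest(scores, factor):
    # Blocky: each coarse cell becomes a factor x factor block
    return scores.repeat(factor, axis=0).repeat(factor, axis=1)

def upsample_bilinear(scores, factor):
    # Smooth: interpolate between the four surrounding coarse cells
    h, w = scores.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = scores[y0][:, x0] * (1 - wx) + scores[y0][:, x1] * wx
    bot = scores[y1][:, x0] * (1 - wx) + scores[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

coarse = np.array([[0.0, 1.0],
                   [1.0, 0.0]])
blocky = upsample_nearest(coarse, 4)   # only 0s and 1s, hard edges
smooth = upsample_bilinear(coarse, 4)  # gradual transitions between cells
```

A learned deconvolution layer can in principle do better than either, since its kernel weights are trained rather than fixed.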
I guess it’s a small thing I’m missing, so any help is appreciated. I’ll try deploying my own network; it’s probably something related to that. Or could it be something in the argmax?
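For reference, the argmax step itself is straightforward: it collapses the per-class score planes into a single class-ID map. A minimal sketch, assuming a CHW score layout with made-up values for 3 classes over a 2x2 grid:

```python
import numpy as np

# Hypothetical class-score tensor in CHW layout (classes, height, width)
scores = np.array([
    [[0.1, 0.9], [0.2, 0.3]],   # class 0 scores
    [[0.8, 0.05], [0.1, 0.6]],  # class 1 scores
    [[0.1, 0.05], [0.7, 0.1]],  # class 2 scores
])

# Per-pixel class map: index of the highest-scoring class at each pixel
class_map = scores.argmax(axis=0)
```

If the class map looks blocky even after this step, the problem is usually upstream (the resolution of the scores being argmaxed), not the argmax itself.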
Edit: it worked with a different network, so that was probably it. I’m attaching a picture.