TX2: jetson-inference segnet-console output is horrible. Why?

TX2, JetPack 4.2.1, jetson-inference.
I use segnet-console with FCN-Alexnet-Cityscapes-SD/HD.
For example: ./segnet-console city1.png output1.png network=FCN-Alexnet-Cityscapes-SD/HD
The program runs and exits cleanly, but the result picture (output1.png) is horrible: I cannot see a clear outline for car/road/person… Why?
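
Roughly, the commands I run look like this (written out with the leading dashes, which the forum seems to strip; the lowercase network names here follow the jetson-inference docs and may differ from what is accepted on your build):

./segnet-console city1.png output1.png --network=fcn-alexnet-cityscapes-sd
./segnet-console city1.png output1.png --network=fcn-alexnet-cityscapes-hd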

Hi,

Would you mind sharing output1.png with us?
Thanks.

https://blog.csdn.net/qq_41587270/article/details/98209804
Hi, these are the test pictures for segnet-console, including output1.png.
Thanks

Have you tested these models (FCN-Alexnet-Cityscapes-SD/HD, FCN-Alexnet-Synthia-summer-SD/HD) on the TX2 recently?

Hi,

The default rendering mode is a linear combination weighted by the class confidences.
So the output looks correct to me.

You can try setting the overlay filter to FILTER_POINT to see if that gives the result you want:
https://github.com/dusty-nv/jetson-inference/blob/master/examples/segnet-console/segnet-console.cpp#L103

if( !net->Overlay(outCUDA, imgWidth, imgHeight, segNet::FILTER_POINT) )
...
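
For context, a minimal before/after sketch of that call site; the error handling shown here is illustrative, not quoted from the sample:

// default (blended class colors):
// if( !net->Overlay(outCUDA, imgWidth, imgHeight, segNet::FILTER_LINEAR) )

// point mode: each output pixel gets the single color of its nearest class-grid
// cell, giving hard class boundaries instead of the blended overlay
if( !net->Overlay(outCUDA, imgWidth, imgHeight, segNet::FILTER_POINT) )
{
    printf("segnet-console:  failed to overlay segmentation\n");   // assumed error message
    return 0;   // exact failure handling in the sample may differ
}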

Thanks.

Thanks Aasta. Also note that I am moving over to the FCN-ResNet18 models, which are trained with PyTorch and get better accuracy and performance. I hope to release them in the next couple of weeks, so stay tuned.

Hi,

I tried 'if( !net->Overlay(outCUDA, imgWidth, imgHeight, segNet::FILTER_POINT) )'. The results are better than with 'FILTER_LINEAR', but they are still not good.
Output pictures:
https://blog.csdn.net/qq_41587270/article/details/98480172

thanks

Hi,

Did you mean the accuracy?

Please note that our sample is intended to demonstrate how to use our high-performance library.
Since all the models are available on GitHub, you can refine the model yourself to get better accuracy.

Thanks.

OK

Thanks

Hi,

The jetson-inference segnet-console is good; I can run tests with a caffemodel.
But I want to test with a UFF model. How can I get it to use a UFF model?
I have only just started learning, so sorry for the basic question.

Thanks

nvidia@nvidia-desktop:~/project/jetson-inference/build/aarch64/bin$ ./segnet-console city6.png city6_2.png --prototxt=NULL --model=networks/SAITE_SEG/saite_seg.uff --labels=networks/SAITE_SEG/cityscapes-labels.txt --colors=networks/SAITE_SEG/cityscapes-deploy-colors.txt --input_blob=data --output_blob=score_fr

segNet -- loading segmentation network model from:
-- prototxt: NULL
-- model: networks/SAITE_SEG/saite_seg.uff
-- labels: networks/SAITE_SEG/cityscapes-labels.txt
-- colors: networks/SAITE_SEG/cityscapes-deploy-colors.txt
-- input_blob 'data'
-- output_blob 'score_fr_21classes'
-- batch_size 1

[TRT] TensorRT version 5.1.6
[TRT] loading NVIDIA plugins…
[TRT] Plugin Creator registration succeeded - GridAnchor_TRT
[TRT] Plugin Creator registration succeeded - NMS_TRT
[TRT] Plugin Creator registration succeeded - Reorg_TRT
[TRT] Plugin Creator registration succeeded - Region_TRT
[TRT] Plugin Creator registration succeeded - Clip_TRT
[TRT] Plugin Creator registration succeeded - LReLU_TRT
[TRT] Plugin Creator registration succeeded - PriorBox_TRT
[TRT] Plugin Creator registration succeeded - Normalize_TRT
[TRT] Plugin Creator registration succeeded - RPROI_TRT
[TRT] Plugin Creator registration succeeded - BatchedNMS_TRT
[TRT] completed loading NVIDIA plugins.
[TRT] detected model format - UFF (extension '.uff')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file networks/SAITE_SEG/saite_seg.uff.1.1.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading networks/SAITE_SEG/saite_seg.uff
[TRT] UffParser: Parser error: network/input/Placeholder: Invalid number of Dimensions 0
[TRT] failed to parse UFF model 'networks/SAITE_SEG/saite_seg.uff'
[TRT] device GPU, failed to load networks/SAITE_SEG/saite_seg.uff
segNet – failed to initialize.
segnet-console: failed to initialize segnet
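
As a side note, the "Invalid number of Dimensions 0" parser error usually means the UFF parser was never told the shape of the input tensor. This is not the jetson-inference code path itself, but a rough standalone sketch of a TensorRT 5.x UFF import shows where that information is normally registered (the input/output names and the 3x512x1024 shape below are placeholders, not taken from this model):

#include <NvInfer.h>
#include <NvUffParser.h>
#include <cstdio>

class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if( severity <= Severity::kWARNING )   // only print warnings and errors
            printf("[TRT] %s\n", msg);
    }
} gLogger;

int main()
{
    nvinfer1::IBuilder* builder           = nvinfer1::createInferBuilder(gLogger);
    nvinfer1::INetworkDefinition* network = builder->createNetwork();
    nvuffparser::IUffParser* parser       = nvuffparser::createUffParser();

    // the UFF parser needs the input dimensions up front; without this it
    // reports "Invalid number of Dimensions 0" for the input placeholder
    parser->registerInput("data", nvinfer1::DimsCHW(3, 512, 1024), nvuffparser::UffInputOrder::kNCHW);  // placeholder shape
    parser->registerOutput("score_fr");                                                                 // placeholder name

    if( !parser->parse("networks/SAITE_SEG/saite_seg.uff", *network, nvinfer1::DataType::kFLOAT) )
        printf("failed to parse UFF model\n");

    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}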

Hi 730237259, please see here for the response in your other post:

https://devtalk.nvidia.com/default/topic/1062181/jetson-tx2/jetson-inference-segnet-console-by-uff-uffparser-parser-error-invalid-number-of-dimensions-0/post/5378829/#5378829