Jetson-inference segnet-console with custom UFF model: UffParser: Parser error: Invalid number of Dimensions 0

Hi,

Running jetson-inference segnet-console with a custom UFF model fails with:
UffParser: Parser error: network/input/Placeholder Invalid number of Dimensions 0

How can I fix this?
Full log:
nvidia@nvidia-desktop:~/project/jetson-inference/build/aarch64/bin$ ./segnet-console city6.png city6_2.png --prototxt=NULL --model=networks/SAITE_SEG/saite_seg.uff --labels=networks/SAITE_SEG/cityscapes-labels.txt --colors=networks/SAITE_SEG/cityscapes-deploy-colors.txt --input_blob=data --output_blob=score_fr

segNet – loading segmentation network model from:
– prototxt: NULL
– model: networks/SAITE_SEG/saite_seg.uff
– labels: networks/SAITE_SEG/cityscapes-labels.txt
– colors: networks/SAITE_SEG/cityscapes-deploy-colors.txt
– input_blob ‘data’
– output_blob ‘score_fr_21classes’
– batch_size 1

[TRT] TensorRT version 5.1.6
[TRT] loading NVIDIA plugins…
[TRT] Plugin Creator registration succeeded - GridAnchor_TRT
[TRT] Plugin Creator registration succeeded - NMS_TRT
[TRT] Plugin Creator registration succeeded - Reorg_TRT
[TRT] Plugin Creator registration succeeded - Region_TRT
[TRT] Plugin Creator registration succeeded - Clip_TRT
[TRT] Plugin Creator registration succeeded - LReLU_TRT
[TRT] Plugin Creator registration succeeded - PriorBox_TRT
[TRT] Plugin Creator registration succeeded - Normalize_TRT
[TRT] Plugin Creator registration succeeded - RPROI_TRT
[TRT] Plugin Creator registration succeeded - BatchedNMS_TRT
[TRT] completed loading NVIDIA plugins.
[TRT] detected model format - UFF (extension ‘.uff’)
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file networks/SAITE_SEG/saite_seg.uff.1.1.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading networks/SAITE_SEG/saite_seg.uff
[TRT] UffParser: Parser error: network/input/Placeholder: Invalid number of Dimensions 0
[TRT] failed to parse UFF model ‘networks/SAITE_SEG/saite_seg.uff’
[TRT] device GPU, failed to load networks/SAITE_SEG/saite_seg.uff
segNet – failed to initialize.
segnet-console: failed to initialize segnet

Thanks

Hi 730237259, I am not familiar with this saite_seg.uff model or the TensorFlow network it came from, so I am not sure of the issue, and I haven’t tested UFF with the segNet code before. I have been using PyTorch to train the new segmentation models with FCN-ResNet18, see here:

https://devtalk.nvidia.com/default/topic/1051696/jetson-nano/how-to-run-schematic-segmentation-samples-in-nano/post/5377005/#5377005
https://github.com/dusty-nv/pytorch-segmentation

The ‘invalid number of dimensions 0’ error you see on the input may be because this overload of the function is not being used in the segNet code:

https://github.com/dusty-nv/jetson-inference/blob/99ecf1f41b47b6d4f8fb848963f4c6517cfa82c3/c/tensorNet.h#L236

For UFF you need to specify the input dimensions to the parser. See the detectNet class, which supports UFF networks, for an example of doing this:

https://github.com/dusty-nv/jetson-inference/blob/99ecf1f41b47b6d4f8fb848963f4c6517cfa82c3/c/detectNet.cpp#L175
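Under the hood, the pattern comes down to registering the input blob with explicit dimensions before parsing. A minimal sketch, assuming the TensorRT 5.x UFF parser API (`createUffParser`, `registerInput`, `registerOutput`); the blob names come from this thread, and the 3x512x1024 dimensions are placeholders that must match your actual network:

```cpp
// Sketch: giving the UFF parser explicit input dimensions (TensorRT 5.x API).
// Without registerInput(), the parser has no shape for the placeholder and
// fails with "Invalid number of Dimensions 0".
#include "NvInfer.h"
#include "NvUffParser.h"

bool parseUffModel( nvinfer1::INetworkDefinition* network, const char* uffPath )
{
    nvuffparser::IUffParser* parser = nvuffparser::createUffParser();

    // Register the input with its (C,H,W) dims -- illustrative values only.
    parser->registerInput("network/input/Placeholder",
                          nvinfer1::DimsCHW(3, 512, 1024),
                          nvuffparser::UffInputOrder::kNCHW);

    // Register the output blob by name.
    parser->registerOutput("network/output/ArgMax");

    return parser->parse(uffPath, *network, nvinfer1::DataType::kFLOAT);
}
```

This is only a sketch of the parser-side requirement; in jetson-inference the registration happens inside tensorNet when the UFF LoadNetwork() overload is used.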

Hi,
I set segNet’s input_blob to ‘network/input/Placeholder’ and its output_blob to ‘network/output/ArgMax’, and that problem is solved.

But now there is a new problem: Function not implemented

Why does this happen?

Full log:
nvidia@nvidia-desktop:~/project/jetson-inference/build/aarch64/bin$ ./segnet-console city6.png city6_2.png --prototxt=NULL --model=networks/SAITE_SEG/saite_seg.uff --labels=networks/SAITE_SEG/cityscapes-labels.txt --colors=networks/SAITE_SEG/cityscapes-deploy-colors.txt --input_blob=network/input/Placeholder --output_blob=network/output/ArgMax

segNet – loading segmentation network model from:
– prototxt: NULL
– model: networks/SAITE_SEG/saite_seg.uff
– labels: networks/SAITE_SEG/cityscapes-labels.txt
– colors: networks/SAITE_SEG/cityscapes-deploy-colors.txt
– input_blob ‘network/input/Placeholder’
– output_blob ‘network/output/ArgMax’
– batch_size 1

[TRT] TensorRT version 5.1.6
[TRT] loading NVIDIA plugins…
[TRT] Plugin Creator registration succeeded - GridAnchor_TRT
[TRT] Plugin Creator registration succeeded - NMS_TRT
[TRT] Plugin Creator registration succeeded - Reorg_TRT
[TRT] Plugin Creator registration succeeded - Region_TRT
[TRT] Plugin Creator registration succeeded - Clip_TRT
[TRT] Plugin Creator registration succeeded - LReLU_TRT
[TRT] Plugin Creator registration succeeded - PriorBox_TRT
[TRT] Plugin Creator registration succeeded - Normalize_TRT
[TRT] Plugin Creator registration succeeded - RPROI_TRT
[TRT] Plugin Creator registration succeeded - BatchedNMS_TRT
[TRT] completed loading NVIDIA plugins.
[TRT] detected model format - UFF (extension ‘.uff’)
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file networks/SAITE_SEG/saite_seg.uff.1.1.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading networks/SAITE_SEG/saite_seg.uff
Function not implemented
Segmentation fault (core dumped)

Thanks

Did you modify the segNet code to use the UFF overload of LoadNetwork() and pass in the input dimensions of your network?

UFF models need explicit support for passing the input dimensions to the UFF parser, which hasn’t been added to the segNet class yet, so see the detectNet code for an example of how it works. The UFF overload of LoadNetwork() accepts the input dimensions for UFF models.
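For illustration, the change inside segNet would look roughly like the fragment below. This is a hypothetical sketch, assuming the UFF LoadNetwork() overload in tensorNet.h linked earlier (exact parameter order may differ at your commit); the blob names are from this thread and the Dims3 values are placeholders for your model’s real input size:

```cpp
// Hypothetical fragment inside segNet's model-loading code, mirroring how
// detectNet loads UFF models: the overload that takes a Dims3 lets tensorNet
// register the input dimensions with the UFF parser.
std::vector<std::string> outputs;
outputs.push_back("network/output/ArgMax");

if( !net->LoadNetwork(NULL,                                 // no prototxt for UFF
                      "networks/SAITE_SEG/saite_seg.uff",   // UFF model path
                      NULL,                                 // no mean file
                      "network/input/Placeholder",          // input blob name
                      Dims3(3, 512, 1024),                  // (C,H,W) -- match your model
                      outputs) )
{
    return false;  // parsing/loading failed
}
```

The key difference from the non-UFF path is the Dims3 argument; without it, the parser never learns the placeholder’s shape.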

No, I did not modify the segNet code; I only passed the UFF model on the command line.
I will modify the segNet code following detectNet.
Why not add the UFF parser to the segNet/imageNet code? I look forward to your update.
Thank you very much.

I haven’t been using TensorFlow to train segmentation and classification networks; rather, I have been using PyTorch to train these models and export them to ONNX with these repos:

detectNet supports the popular SSD-Mobilenet/Inception models, which were converted from TensorFlow with AastaNV’s TRT_object_detection tool.

Hi,
The jetson-inference segNet has been updated, that’s very good!

But using a UFF model still fails; I hope you can help me:
UFFParser: Parser error: network/shufflenet_encoder/conv1/conv1/layer_conv2d/BiasAdd: The input to the Scale Layer is required to have a minimum of 3 dimensions.

Full log:
nvidia@nvidia-desktop:~/workspaces/jetson-inference/build/aarch64/bin$ ./segnet-console city_1_1.jpg city_1_1.jpg --prototxt=NULL --model=networks/SAITE_SEG/saite_seg.uff --labels=networks/SAITE_SEG/cityscapes-labels.txt --colors=networks/SAITE_SEG/cityscapes-deploy-colors.txt --input_blob=network/input/Placeholder --output_blob=network/output/Softmax --batch_size 1

segNet – loading segmentation network model from:
– prototxt: NULL
– model: networks/SAITE_SEG/saite_seg.uff
– labels: networks/SAITE_SEG/cityscapes-labels.txt
– colors: networks/SAITE_SEG/cityscapes-deploy-colors.txt
– input_blob ‘network/input/Placeholder’
– output_blob ‘network/output/Softmax’
– batch_size 1

[TRT] TensorRT version 5.0.6
[TRT] loading NVIDIA plugins…
[TRT] completed loading NVIDIA plugins.
[TRT] detected model format - UFF (extension ‘.uff’)
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file networks/SAITE_SEG/saite_seg.uff.1.1.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading networks/SAITE_SEG/saite_seg.uff
[TRT] network/shufflenet_encoder/conv1/conv1/layer_conv2d/Conv2D: image size is smaller than filter size
[TRT] UFFParser: Parser error: network/shufflenet_encoder/conv1/conv1/layer_conv2d/BiasAdd: The input to the Scale Layer is required to have a minimum of 3 dimensions.
[TRT] failed to parse UFF model ‘networks/SAITE_SEG/saite_seg.uff’
[TRT] device GPU, failed to load networks/SAITE_SEG/saite_seg.uff
segNet – failed to initialize.
segnet-console: failed to initialize segnet

Hi,

UFFParser: Parser error: network/shufflenet_encoder/conv1/conv1/layer_conv2d/BiasAdd: The input to the Scale Layer is required to have a minimum of 3 dimensions.

This looks like there is some unexpected behavior inside your model.
Would you mind checking whether your UFF model can be executed with TensorRT directly first?

cd /usr/src/tensorrt/bin/
./trtexec --uff= ...
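For a UFF model, trtexec also needs the input blob name with its dimensions and the output blob name. A sketch, assuming TensorRT 5.x’s trtexec flags (`--uffInput` takes name,C,H,W) and using the blob names from this thread; the 3,512,1024 dimensions are placeholders for your model’s real input size:

```shell
cd /usr/src/tensorrt/bin/
./trtexec --uff=networks/SAITE_SEG/saite_seg.uff \
          --uffInput=network/input/Placeholder,3,512,1024 \
          --output=network/output/Softmax \
          --fp16
```

If trtexec reproduces the same Scale Layer error, the problem is in the model/conversion itself rather than in the jetson-inference segNet code.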

Thanks.