Segnet input/output blobs

I’m trying to play around with segnet and can’t get the examples to work. I’ve tried several commands, but here is one of them:

./segnet.py --network=/home/keith/jetson-interface-old/data/networks/FCN-ResNet18-Cityscapes-1024x512/fcn_resnet18.onnx /mnt/cifs/NAS/ImageTraining/Testing/*.jpg /mnt/cifs/NAS/ImageTraining/Testing/output_%i.jpg
The error I get is:

[TRT] INVALID_ARGUMENT: Cannot find binding of given name: data
[TRT] failed to find requested input layer data in network
[TRT] device GPU, failed to create resources for CUDA engine
[TRT] failed to create TensorRT engine for /home/keith/jetson-interface-old/data/networks/FCN-ResNet18-Cityscapes-1024x512/fcn_resnet18.onnx, device GPU
[TRT] segNet -- failed to load.
jetson.inference -- segNet failed to load network

I’m not sure what the input/output blobs are for. I can provide values for them, but I’m not sure what to provide. Any suggestions?

Hi,

segnet.py has some predefined paths for the built-in networks.

For example, we have fcn_resnet18.onnx, classes.txt and colors.txt under ./networks/FCN-ResNet18-Cityscapes-1024x512/:

$ ll networks/FCN-ResNet18-Cityscapes-1024x512/*
-rw-rw-r-- 1 nvidia nvidia      159 Aug 20  2019 networks/FCN-ResNet18-Cityscapes-1024x512/classes.txt
-rw-r--r-- 1 nvidia nvidia      229 Sep  4  2019 networks/FCN-ResNet18-Cityscapes-1024x512/colors.txt
-rw-rw-r-- 1 nvidia nvidia 47137244 Sep  4  2019 networks/FCN-ResNet18-Cityscapes-1024x512/fcn_resnet18.onnx
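
If you want to double-check that those files are actually in place before running the sample, a quick Python sketch like the following works (the directory path is the one from the listing above; adjust it to wherever your copy of jetson-inference keeps its networks):

import os

net_dir = "networks/FCN-ResNet18-Cityscapes-1024x512"
for name in ("fcn_resnet18.onnx", "classes.txt", "colors.txt"):
    path = os.path.join(net_dir, name)
    # segnet.py expects all three of these files next to each other
    print(path, "OK" if os.path.isfile(path) else "MISSING")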

So we can run the sample successfully with the following command:

$ ./segnet.py --network=fcn-resnet18-cityscapes-1024x512 images/city_0.jpg images/test/output.jpg

Thanks.

Here’s a table of the pre-defined segmentation models that are included with the project, along with the names to use with the --network command-line argument:

https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md#pre-trained-segmentation-models-available

BTW, for these FCN-ResNet18 ONNX models, the input blob name is input_0 and the output blob name is output_0 (these are the names of the input/output layers in the network graph).
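
If you’re ever unsure what the blob names of a particular ONNX model are, you can inspect the model yourself. This is a minimal sketch assuming the onnx Python package is installed (it isn’t part of jetson-inference itself); it just prints the graph’s input and output names:

import onnx

# the graph inputs/outputs are what segnet.py refers to as input/output blobs
model = onnx.load("/home/keith/jetson-interface-old/data/networks/FCN-ResNet18-Cityscapes-1024x512/fcn_resnet18.onnx")
print("inputs: ", [i.name for i in model.graph.input])
print("outputs:", [o.name for o in model.graph.output])

For the FCN-ResNet18 models shipped with the project, this should match the input_0 / output_0 names mentioned above; for your own models it will show whatever names they were exported with.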

If you want to run the complete command manually, it should look like this:

segnet.py \
  --model=/home/keith/jetson-interface-old/data/networks/FCN-ResNet18-Cityscapes-1024x512/fcn_resnet18.onnx \
  --labels=/home/keith/jetson-interface-old/data/networks/FCN-ResNet18-Cityscapes-1024x512/classes.txt \
  --colors=/home/keith/jetson-interface-old/data/networks/FCN-ResNet18-Cityscapes-1024x512/colors.txt \
  --input_blob=input_0 \
  --output_blob=output_0 \
  /mnt/cifs/NAS/ImageTraining/Testing/*.jpg \
  /mnt/cifs/NAS/ImageTraining/Testing/output_%i.jpg
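
If you’d rather load the model directly from the Python API, a rough equivalent is sketched below. The segNet constructor parses the same argv-style flags as the command line; note that the image-handling calls (loadImage, cudaAllocMapped, saveImage) follow the pattern used in segnet.py and can differ between jetson-inference versions, and example.jpg / overlay.jpg are placeholder file names:

import jetson.inference
import jetson.utils

# the constructor parses the same flags as the command line above
net = jetson.inference.segNet(argv=[
    "--model=/home/keith/jetson-interface-old/data/networks/FCN-ResNet18-Cityscapes-1024x512/fcn_resnet18.onnx",
    "--labels=/home/keith/jetson-interface-old/data/networks/FCN-ResNet18-Cityscapes-1024x512/classes.txt",
    "--colors=/home/keith/jetson-interface-old/data/networks/FCN-ResNet18-Cityscapes-1024x512/colors.txt",
    "--input_blob=input_0",
    "--output_blob=output_0",
])

# placeholder input file name -- substitute one of your test images
img = jetson.utils.loadImage("/mnt/cifs/NAS/ImageTraining/Testing/example.jpg")

# allocate a separate output buffer for the overlay, matching the input size/format
overlay = jetson.utils.cudaAllocMapped(width=img.width, height=img.height, format=img.format)

net.Process(img)        # run segmentation on the input image
net.Overlay(overlay)    # render the class overlay into the output buffer

# placeholder output file name
jetson.utils.saveImage("/mnt/cifs/NAS/ImageTraining/Testing/overlay.jpg", overlay)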

Thanks! Are there any instructions for training a custom dataset to work with segnet? I know jetson-inference/README.md at master · dusty-nv/jetson-inference · GitHub has some steps for training; will those also work for segnet? I’ve got a custom dataset I trained, but it doesn’t seem to have the same input/output blobs, as I get an error with those.

I think I might have found something.

Thanks for the help
