Using a custom model with jetson-inference

Hi all,

I’ve trained a semantic segmentation model in PyTorch on the Cityscapes + BDD100K datasets. I’m wondering how I can integrate it into jetson-inference, specifically how to turn it into a network that segNet can load. I’ve looked in the documentation but I can’t find anything. Thank you.

Best regards

Hi @kekboiA, what is the network architecture of your model? The segNet class in jetson-inference is set up for FCN-ResNet (and FCN-AlexNet, although that isn’t used much anymore).

If your model uses different input pre-processing (e.g. RGB vs. BGR, mean pixel subtraction, or normalization) or its output layers differ, then segNet’s pre/post-processing code would need to be adapted to support it.
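To make the mismatch concrete, here is a minimal sketch of the kind of pre-processing a typical PyTorch segmentation model expects. The ImageNet mean/std values below are an assumption — use whatever your own training pipeline applied:

```python
import numpy as np

# Assumed ImageNet-style normalization constants; substitute your own.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(rgb_uint8):
    """uint8 HWC RGB image -> float32 CHW tensor, normalized per channel."""
    x = rgb_uint8.astype(np.float32) / 255.0  # scale to [0, 1]
    x = (x - MEAN) / STD                      # per-channel normalization
    return x.transpose(2, 0, 1)               # HWC -> CHW
```

If segNet’s built-in pre-processing (mean subtraction, channel order, value range) doesn’t match this, the network will run but produce garbage output, so it’s worth verifying step by step.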

Hi dusty, the network architecture is U-Net. The input is RGB and normalization is applied.

I haven’t used U-Net with jetson-inference before, so you would need to check the pre/post-processing code here:

Assuming my model’s processing is compatible, do I run that export script on the .pth model? What do I do after that?

The script from my pytorch-segmentation repo is set up to export FCN-ResNet18 models, not U-Net. In particular, you would need to swap out this line, where the model architecture is instantiated, for your U-Net architecture:

Otherwise I believe the ONNX export process would be pretty similar.

You can see an example command for running a user-provided FCN-AlexNet Caffe segmentation model in jetson-inference here:

For an ONNX model, you would probably use a command like:

./segnet-console img_input.jpg img_output.jpg \
    --model=$NET/model.onnx \
    --labels=$NET/labels.txt \
    --colors=$NET/colors.txt \
    --input_blob=input_0 \
    --output_blob=output_0
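For reference, the labels and colors files are plain text with one class per line, in the same order as your model’s output channels. A hedged example (class names and RGB values here are just illustrative Cityscapes-style entries — use your own class list):

```text
# labels.txt — one class name per line
road
sidewalk
building

# colors.txt — one "R G B" triple per line, same order as labels.txt
128 64 128
244 35 232
70 70 70
```

The number of lines in each file should match the number of output classes your U-Net was trained with.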