Will there be a centralised workflow for creating custom models for semantic segmentation on Jetson devices?

I would just like to know if Jetson will ever provide a centralised workflow for creating custom models.

I went through the basic guide in Dusty’s repository, and that was quite helpful. Where I got stuck was generating a model and then actually using that custom model.

Is there somewhere I can learn about semantic segmentation on Jetson devices in more depth? I am happy to pay.


Could you share the issue or error you are seeing with your custom model?

jetson-inference uses TensorRT as its backend inference engine, so please convert your model into ONNX format and deploy it with TensorRT.


I have been reading through the tutorial again and I think I follow now.
I need to take my annotated dataset, train a model with PyTorch, export it in ONNX format, and then provide that model to segNet along with the image I want to process.
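Once the ONNX file exists, the last step looks roughly like this command-line sketch. The flag names follow the custom-model examples in the jetson-inference docs; the file names are placeholders, and the blob names must match whatever was used at export time, so check the repo docs for your version:

```shell
# Load a custom ONNX model with segnet; TensorRT builds an engine
# from it on first run, which can take several minutes.
segnet.py --model=model.onnx \
          --labels=classes.txt \
          --colors=colors.txt \
          --input_blob=input_0 \
          --output_blob=output_0 \
          input.jpg output.jpg
```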

I will be getting started this week.