Will there be a centralised workflow for creating custom models for semantic segmentation on Jetson devices?

I would just like to know if Jetson will ever provide a centralised workflow for creating custom models.

I went through the basic guide provided in Dusty’s repository, and that was quite helpful. Where I got stuck was generating the model and then actually using that custom model.

Is there some place where I can learn about semantic segmentation on Jetson devices in more depth? I am happy to pay.

Hi,

Could you share the issue/error on your custom model?

jetson-inference uses TensorRT as a backend inference engine.
So please convert your model into ONNX format and deploy it with TensorRT.

Thanks.

I have been reading through the tutorial again and I think I follow now.
I need to get my annotated data set, feed it into PyTorch with ONNX as the output format, and then I can provide that model to segNet with the image I want to process.
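If it helps anyone following along, the final step looks roughly like the command below. The flag names follow the jetson-inference segnet documentation for loading custom ONNX models; the file names are placeholders for your own exported model, class label list, and color map:

```
segnet.py --model=my_segmentation.onnx \
          --labels=classes.txt \
          --colors=colors.txt \
          --input_blob=input_0 \
          --output_blob=output_0 \
          input.jpg output.jpg
```

The `--input_blob`/`--output_blob` values need to match the tensor names chosen when exporting the ONNX model.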

I will be getting started this week.

Is this still an issue that needs support? Are there any results you can share?

I still haven’t had the time to get started; I have only been reading up on the process more.
I do have an idea of how to do it, so that’s fine. However, my original question still stands: will there be a centralised workflow for this from Nvidia at some point?
e.g. a GUI application that takes our annotated images, does everything with PyTorch under the hood, and exports ONNX models that we can feed straight into segNet?