How do I develop a custom ML model on the Jetson Nano instead of using the jetson-inference object detection examples?
- I want to develop my own deep learning model and deploy it on a Jetson Nano board.
Any suggestions?
Hi @VK01, you can train your own object detection models in jetson-inference by following this part of the Hello AI World tutorial: https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-ssd.md
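For orientation, the training-and-deployment flow from that tutorial looks roughly like the commands below. This is a sketch, not a substitute for the linked docs: the dataset path and model directory are placeholders, and the exact flags may differ by jetson-inference version.

```shell
# Sketch of the pytorch-ssd workflow from the Hello AI World tutorial.
# Run from jetson-inference/python/training/detection/ssd on the Nano;
# data/my-dataset and models/my-model below are placeholder names.

# 1. Train an SSD-Mobilenet model on your own dataset (Pascal VOC layout)
python3 train_ssd.py --dataset-type=voc --data=data/my-dataset --model-dir=models/my-model

# 2. Export the trained PyTorch checkpoint to ONNX
python3 onnx_export.py --model-dir=models/my-model

# 3. Run it with detectnet, which builds a TensorRT engine from the ONNX file
detectnet --model=models/my-model/ssd-mobilenet.onnx \
          --labels=models/my-model/labels.txt \
          --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
          csi://0
```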
However, let’s say that you want to use your own custom network architecture. Some things you could do:
- Use PyTorch to train the model, and run the inference in PyTorch on the Nano as well (you can install PyTorch on your Nano). You could use a library like torch2trt to accelerate it with TensorRT without really changing your PyTorch code.
- Use TensorFlow to train the model, and run the inference in TensorFlow on the Nano as well (you can install TensorFlow on your Nano). You could use a library like TF-TRT to accelerate it with TensorRT without really changing your TensorFlow code.
- Train your model using TAO Toolkit, and deploy it to your Jetson using DeepStream.
Those are just a few of the ways that still give you the option of deploying it with TensorRT on your Jetson for higher performance.
Thanks! I will try the ways mentioned above.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.