How to learn deployment?

Hello, I am a first-year graduate student. My field is embedded artificial intelligence, specifically deep learning model deployment, but I don't know how to learn it or what is important about it, because I have not yet studied deep learning. I have a TX2 and have used it to run the Hello AI World program. I would really appreciate some guidance, such as what to do next, and recommendations for some open-source projects.

Hi @924971015, here are a bunch of open-source projects that you can try on Jetson:

Have you tried the steps of Hello AI World tutorial where you train your own models with PyTorch?

That can be useful for highlighting the training->inference deployment workflow.

Thanks for your answer. I have worked through the entire Hello AI World tutorial and completed it successfully, but I do not know what concrete steps to take next, so I am looking forward to your guidance.

Perhaps you can expand on what you mean to learn about?

Typically deployment means the inferencing portion, after the model is trained. Inferencing can be done in the framework the model was trained in (e.g. PyTorch or TensorFlow), however on NVIDIA GPUs and devices such as Jetson, NVIDIA TensorRT is used to accelerate the inferencing. This is what Hello AI World uses to run the inferencing and deploy the model.

To learn more about TensorRT, you can check the samples on your Jetson under /usr/src/tensorrt and find the documentation here:

Hello AI World uses TensorRT automatically - you can find most of that code here:

Thank you very much, I will learn it carefully.

Now I am trying to deploy YOLOv3 directly to the TX2 without using TensorRT, but an error appears. I have tried several ways to solve it, but none have worked. I am looking forward to your reply, thanks a lot.

Hi @924971015, please post a new topic about the yolov3 problem that you are encountering, along with text from the terminal. Thanks.

Thank you for your reminder, I have solved this problem.