Best way to deploy object detection model in jetson orin nano

Hi, new to Jetsons! I was wondering what the best approach would be to deploy a trained model on the Jetson Orin Nano so it uses its full potential. Currently I am using it like a desktop: I installed all the dependencies and run the inference model as I would on my regular system. But I want to know whether Docker or a more systematic approach would be better for deploying ML models here and running it standalone for detections. Thank you!


We usually recommend using our TensorRT library to deploy DNN models for better performance.
Below are some examples for your reference:
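As a rough sketch of that workflow (not from the original examples; the file names below are placeholders): export the trained detector to ONNX on your training machine, then build and benchmark a TensorRT engine directly on the Jetson with the `trtexec` tool that ships with JetPack.

```shell
# Build a TensorRT engine from an ONNX export of the trained detector.
# trtexec ships with JetPack under /usr/src/tensorrt/bin.
# "model.onnx" and "model.engine" are placeholder file names.
/usr/src/tensorrt/bin/trtexec \
    --onnx=model.onnx \
    --saveEngine=model.engine \
    --fp16   # FP16 precision is usually a large speedup on the Orin GPU

# Benchmark the generated engine to check latency/throughput on-device:
/usr/src/tensorrt/bin/trtexec --loadEngine=model.engine
```

The engine is built per-device, so run the build step on the Orin Nano itself rather than copying an engine from another machine.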

