Hello everyone, I have two questions.
I used the scripts from the dusty-nv/jetson-inference repo ("Hello AI World" guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson) on a Jetson Nano to train an object detection model on custom data, and it worked fine. Now I need to enlarge my dataset and retrain, and I'm afraid training on the Nano will take too long.
I'm wondering whether it is possible to train the model on a different machine (my own computer or a cloud service) and then convert it into a Jetson Nano-compatible engine with TensorRT.
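To make the question concrete, here is roughly the workflow I have in mind, based on the pytorch-ssd re-training tutorial in the Hello AI World docs; the dataset path and model directory names below are just placeholders for my own setup:

```shell
# On the training machine (PC or cloud GPU) with PyTorch installed,
# using the pytorch-ssd scripts bundled with jetson-inference:
python3 train_ssd.py --dataset-type=voc \
    --data=data/my_dataset --model-dir=models/my_model

# Export the trained checkpoint to ONNX (framework-neutral format):
python3 onnx_export.py --model-dir=models/my_model

# Copy the resulting ssd-mobilenet.onnx to the Nano; detectnet builds
# and caches a TensorRT engine from it on first run:
detectnet --model=models/my_model/ssd-mobilenet.onnx \
          --labels=models/my_model/labels.txt \
          --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
          csi://0
```

If I understand correctly, the ONNX file is portable, so only the final detectnet step needs to happen on the Nano itself. Is that the right approach, or is there a catch (e.g. TensorRT version mismatches between machines)?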
My model detects only one class of objects; can I exploit this fact to increase the frame rate?
Thank you for your consideration.