Jetson Nano + custom data + training and optimisation

Hello everyone, I have two questions.

  1. I used scripts from GitHub - dusty-nv/jetson-inference: Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson. on a Jetson Nano to create an object detection model on custom data, and it worked fine. Now I need to enlarge my dataset and retrain, and I am afraid training on the Nano will take too much time.
    I'm wondering if it is possible to train the model on a different machine (my own computer or a cloud service) and then make it Jetson Nano compatible using TensorRT.

  2. My model detects only one class of objects. Can I exploit this fact to increase the framerate?

Thank you for your consideration.

Hi, the UFF and Caffe parsers have been deprecated from TensorRT 7 onwards, so we request you to try the ONNX parser instead.
Please check the below link for the same.

Thanks!

Hello! Thanks for the quick response. I'll check it out. Thanks again!