I would like to train and run inference on a host (Ubuntu 18.04, x86_64, NVIDIA RTX 2080 GPU), mainly for object detection on images (SSD) with PyTorch (first choice).

I would therefore like to train via transfer learning or by creating my own networks, test them, and, if everything goes well, convert the network to ONNX and use it on a Jetson Nano.

First I tried to get jetson-inference working on my host. I can train and convert to ONNX, but I could not run inference with python3. As far as I know, this isn't supported.

Now I am looking for a tool for my host, but I got confused: should I work with NVIDIA TLT or DIGITS? Can someone explain the differences and recommend one for my tasks?

Thank you very much.

Regards, Klaus