How can we load a network on a PC? Which configurations?

Hi,

I would like to load a custom model on my computer and extract it. I am working with Windows 10, and I would like to know the best possible configurations: working with a Linux Ubuntu virtual machine (wouldn't there be problems with CPUs that may not work with VirtualBox?), or downloading jetson-inference directly on my Windows machine (though I do not know the download commands)? The goal is to extract the model later so that my Jetson Nano can work without drawing on its resources.

Thanks.

T.

Hi @theo17300, the jetson-inference models are run through TensorRT, and TensorRT needs to run onboard the Nano: the first time you load a particular model, TensorRT optimizes it for that particular device. If you built the engine on a PC, the optimized TensorRT engine wouldn't be compatible with the Nano.
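To illustrate why the engine can't be moved between machines: a serialized TensorRT engine is tied to the GPU and TensorRT version it was built with, which is why jetson-inference caches a separate `.engine` file per device. A minimal conceptual sketch of that device-specific caching idea (the helper function and parameter names here are hypothetical, not the actual jetson-inference API):

```python
# Conceptual sketch only -- not the jetson-inference API.
# TensorRT engines are specific to the GPU architecture and TensorRT
# version they were built on, so any cache key must include both. An
# engine built on a desktop GPU cannot be deserialized on the Nano.

def engine_cache_name(model: str, trt_version: str,
                      gpu_arch: str, precision: str) -> str:
    """Build a device-specific cache filename, similar in spirit to how
    jetson-inference names its cached .engine files."""
    return f"{model}.{trt_version}.{gpu_arch}.{precision}.engine"

# The same model yields different (incompatible) engines per device:
pc_engine   = engine_cache_name("resnet18.onnx", "8.2.1", "sm_86", "FP16")
nano_engine = engine_cache_name("resnet18.onnx", "8.2.1", "sm_53", "FP16")
assert pc_engine != nano_engine  # a PC-built engine must be rebuilt on the Nano
```

In practice this means the first run of a model on the Nano is slow (TensorRT builds and caches the engine there), but subsequent runs load the cached engine quickly.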
