Requirement to train Jetpack 4.6 models on dGPU environment

Hi,

I am working with JetPack 4.6 on a Jetson Nano and have downloaded the jetson-inference repository; everything works fine. However, I want to train the models used in the examples on a dGPU environment (Ubuntu 20.04 LTS on an Intel Core i5 10th Gen with a GeForce GTX 1060 card).

Could you please tell me what requirements I need to meet in order to train these models in the mentioned dGPU environment and then deploy them on a Jetson Nano with JetPack 4.6?

Hi @xjrueda,
Do you have a link to those models, just so we can check them?
In any case, a workflow that you can use (and one I have used myself) is: PyTorch (do the training on the dGPU/workstation) → ONNX (export the trained PyTorch model for the Jetson) → TensorRT (run inference on the Jetson with the final model). ONNX also supports other frameworks, such as TensorFlow. In theory most of those frameworks can run directly on the Jetson, but if you want to take full advantage of the available hardware, I suggest porting the model to TensorRT.

Regards,
Andres
Embedded SW Engineer at RidgeRun
Contact us: support@ridgerun.com
Developers wiki: https://developer.ridgerun.com/
Website: www.ridgerun.com

Yes, all the models that are available to use with JetPack. My idea is to train some of these models with custom labels and deploy them on the Jetson Nano.

Hi,

As mentioned above, the flow TensorFlow/PyTorch → ONNX → TensorRT should work.
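For the final TensorRT step of that flow, one option on the Jetson is the `trtexec` tool that ships with JetPack. This is a sketch (file names are placeholders, and the command must run on the Jetson itself):

```shell
# On the Jetson (JetPack 4.6 installs TensorRT under /usr/src/tensorrt):
# build a serialized TensorRT engine from the exported ONNX model.
/usr/src/tensorrt/bin/trtexec \
    --onnx=model.onnx \
    --saveEngine=model.engine \
    --fp16   # the Nano's GPU benefits from FP16 inference
```

Note that jetson-inference performs this conversion automatically the first time it loads an ONNX model, so running `trtexec` by hand is only needed if you want the engine file explicitly.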
Thanks.

@xjrueda, yes you can run the PyTorch train.py and train_ssd.py scripts from jetson-inference on an x86 Linux PC + dGPU system. I’d recommend installing the NVIDIA Container Runtime on your PC and then running them in the NGC PyTorch container (alternatively, jetson-inference itself provides an x86 container: clone jetson-inference to your PC and start it with the docker/run.sh script).
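A sketch of that container setup on the PC (the container tag is an example; pick whichever NGC release suits your driver and PyTorch version):

```shell
# On the x86 PC, after installing the NVIDIA Container Runtime:
# start an NGC PyTorch container with GPU access, mounting the
# current directory so training data and checkpoints persist.
docker run --gpus all -it --rm \
    -v "$PWD":/workspace \
    nvcr.io/nvidia/pytorch:21.08-py3

# Alternatively, from a clone of jetson-inference on the PC:
#   cd jetson-inference && docker/run.sh
```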

The PyTorch training scripts in the jetson-inference repo are git submodules, so you can clone those individually to your PC if you prefer (as opposed to cloning the whole jetson-inference repo, which also works).

Then as @AastaLLL mentioned, you can export your PyTorch model to ONNX on your PC, copy the ONNX model (along with its labels.txt file) over to your Jetson, and run it there as normal.
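Running the copied model on the Jetson then looks roughly like this (model file names are placeholders; the blob names match what the jetson-inference export scripts emit):

```shell
# Detection model trained with train_ssd.py:
detectnet --model=ssd-mobilenet.onnx --labels=labels.txt \
          --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
          /dev/video0

# Classification model trained with train.py:
imagenet --model=resnet18.onnx --labels=labels.txt \
         --input-blob=input_0 --output-blob=output_0 \
         /dev/video0
```

The first run will take a few minutes while TensorRT builds and caches the engine for the ONNX model.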

Thanks, yes. I solved it by installing PyTorch and torchvision to train with train.py and train_ssd.py on x86_64, then deploying the trained model on the Jetson Nano. Solved!
