Loading a custom object detection model onto a Jetson Nano for real-time use

Hello! Apologies, as I am likely missing a key resource somewhere, but I've been working on this on and off for a few days now and I'd like to reach out before spending more time spinning my wheels.

I have gone through Dusty's Hello AI World tutorial and created my own MyDetection.py for real-time use of a pretrained MobileNet model with my cameras, no problem. I now have my own custom-trained MobileNet (TF2) model which I would like to do the same thing with: load it onto the Jetson, point it at my process, and see results in real time. I am having a heck of a time figuring out exactly how to get my model from my PC to running on the Nano. I have it as a .pb file and I'm not sure what my next steps are. Optimization with TensorRT? Something with ONNX? Can someone point me to a good resource I likely missed? Thanks!

Hi @zkuiper, I have mostly been using PyTorch → ONNX models with the Hello AI World project, although there are some legacy Caffe and TensorFlow models in there too. You can give the TF2 → ONNX conversion a go.
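For reference, the conversion itself can be done on your PC with the tf2onnx converter, something along these lines (the SavedModel directory, output name, and opset here are just placeholders for your setup):

```
# TF2 SavedModel -> ONNX (adjust the paths/opset to your model)
python3 -m tf2onnx.convert --saved-model ./my_saved_model --output model.onnx --opset 13
```

If what you have is a frozen-graph .pb rather than a SavedModel directory, tf2onnx also has a --graphdef mode, but then you need to pass the input/output tensor names via --inputs/--outputs.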

What I would recommend is making sure that the trtexec tool can load the ONNX (it's found under /usr/src/tensorrt/bin). This will confirm that TensorRT can in fact load/run your ONNX model.
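Something like this on the Nano (assuming you named the exported file model.onnx):

```
# quick sanity check that TensorRT can parse the model and build an engine from it
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx
```

If it fails, re-running with --verbose will usually show which layer/op the ONNX parser had trouble with.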

Also, you state that you have a MobileNet model (which is a classification model), but you are trying to run it through the detection program. Can you confirm which it is? Either way, you will need to make sure that the pre/post-processing performed in jetson-inference/c/imageNet.cpp or jetson-inference/c/detectNet.cpp is the same as the model expects (i.e., matches the pre/post-processing that was done in TensorFlow).
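For context, this is roughly how a custom ONNX detection model gets pointed at the detection program in Hello AI World. The layer names below (input_0, scores, boxes) are what the PyTorch SSD export produces, so a TF2 export will almost certainly use different tensor names, and the pre/post-processing still has to line up:

```
# illustrative only -- substitute your own model, labels, layer names, and camera
detectnet.py --model=model.onnx --labels=labels.txt \
             --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
             csi://0
```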

Hey Dusty, thanks for the quick reply.

I am using an SSD-MobileNet v2 for object detection. I already built and tested the model to good effect. The Nano deployment is for edge-deployment testing.

Looking into what you are saying, I would do TF2 → ONNX and then deploy the ONNX file onto the Jetson?

Yeah, this is what I would try first. If TF2 is able to export the ONNX, then check that TensorRT can load the ONNX via trtexec. Remember, however, that there could still be a non-trivial amount of work to re-implement the pre/post-processing.