Do I need to pre-process a custom model to use it with jetson-inference?

Hi!

I'm having trouble using my custom model (RetinaNet) for inference on my Jetson Nano 2GB with a USB camera.

Every time I get the error “Can’t load model”, and it looks like there is a problem with the input/output layers.

Is there any pre-processing I need to do on the custom model to make it work with jetson-inference?

Thanks in advance

Hi @ViinceL, I haven’t used RetinaNet with jetson-inference before. It appears to be an object detection model, so you would need to make the pre/post-processing match what your model expects.
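For context, this is roughly how jetson-inference loads a custom ONNX detection model in the Hello AI World custom-detection workflow. Its built-in post-processing is tied to these output layers, which is why a RetinaNet export with different output heads can fail to load. The layer names (`input_0`, `scores`, `boxes`) come from the SSD-MobileNet tutorial export, and the model/label paths are placeholders, so treat everything here as an assumption to adapt:

```python
# Sketch based on the jetson-inference custom-detection example -- not verified
# against a RetinaNet export; layer names and file paths are placeholders.
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet(argv=[
    "--model=retinanet.onnx",   # path to your exported model (assumption)
    "--labels=labels.txt",      # one class name per line
    "--input-blob=input_0",     # name of the model's input layer
    "--output-cvg=scores",      # name of the confidence/score output layer
    "--output-bbox=boxes",      # name of the bounding-box output layer
], threshold=0.5)

camera = jetson.utils.videoSource("/dev/video0")    # USB camera
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)
    display.Render(img)
```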

You will probably also need to adjust the input/output layer names, and possibly the number of output layers. You may also want to look into alternatives like ONNX Runtime, torch2trt, or the TensorRT Python API if those are easier to use with your custom model.
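If you try the ONNX Runtime route, a minimal sanity check like the sketch below can confirm the model loads and show you the actual input/output layer names and shapes. It assumes your RetinaNet has been exported to ONNX as `retinanet.onnx` and takes a single NCHW float32 image; the input shape and normalization are placeholders you would adjust to match your export:

```python
# Minimal ONNX Runtime check -- model filename and input shape are assumptions.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession(
    "retinanet.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

input_name = sess.get_inputs()[0].name
dummy = np.zeros((1, 3, 480, 640), dtype=np.float32)   # placeholder frame

# Run the model and print each output's name and shape -- these are the layer
# names you would need to wire up in whatever post-processing you write.
outputs = sess.run(None, {input_name: dummy})
for meta, out in zip(sess.get_outputs(), outputs):
    print(meta.name, out.shape)
```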
