Hi @ViinceL, I haven't used RetinaNet with jetson-inference before. It appears to be an object detection model, so you would need to make the pre/post-processing match what your model expects:
You would probably also need to adjust the input/output layer names, and possibly the number of output layers. You may also want to look into alternatives like ONNX Runtime, torch2trt, or the TensorRT Python API if one of those is easier to use with your custom model.
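For reference, this is roughly how a custom ONNX detection model gets loaded with the `detectnet` tool (these flags come from the jetson-inference SSD-Mobilenet re-training tutorial). The layer names below (`input_0`, `scores`, `boxes`) are the ones that tutorial's ONNX export produces — a RetinaNet export will most likely use different names, which you can inspect with a tool like Netron:

```shell
# Hypothetical example: paths and layer names depend on your export.
# --input-blob / --output-cvg / --output-bbox tell detectNet which
# ONNX tensors hold the input image, class scores, and bounding boxes.
detectnet --model=models/retinanet/model.onnx \
          --labels=models/retinanet/labels.txt \
          --input-blob=input_0 \
          --output-cvg=scores \
          --output-bbox=boxes \
          csi://0
```

If RetinaNet's outputs aren't in the (scores, boxes) layout that detectNet's post-processing expects, the flags alone won't be enough and you'd need to modify the post-processing code in detectNet.cpp as well.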