jetson.inference: loading my CUSTOM model into detectNet

Can anyone provide a concrete example of how to load my custom transfer-trained model into detectNet?

Ideally a Caffe or ONNX example.

I’ve been trying to do it with the example provided with the Jetson Nano installation.
Apart from the --help output, it has no documentation.

I’ve been trying to pass the model and labels as arguments in the terminal, but it never works. It always tells me that it cannot find the internal ssd-mobilenet-v2 model! It looks like it always falls back to one of the built-in models which came with the installation.
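For reference, here is the kind of command I’ve been trying, loosely modeled on the Hello AI World ONNX re-training example. The model/label paths are from my own setup, and the layer names (`input_0`, `scores`, `boxes`) are what I believe `onnx_export.py` produces for an SSD-Mobilenet model, but I may have these wrong:

```shell
# Sketch only: paths are mine, layer names assumed from the
# SSD-Mobilenet ONNX export (may not match every model).
detectnet \
    --model=models/mymodel/ssd-mobilenet.onnx \
    --labels=models/mymodel/labels.txt \
    --input-blob=input_0 \
    --output-cvg=scores \
    --output-bbox=boxes \
    "images/test/*.jpg" images/out/result_%i.jpg
```

Even with a command like this, it still complains about not finding ssd-mobilenet-v2, so I’m not sure whether my flags are wrong or the custom model is never being picked up.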