FPS problem

Hello everyone.
I am testing the FPS produced by the Jetson Nano on various models. Taking the SSD-MobileNet-V1 network as a reference, I noticed that when I use the parser:

parser.add_argument("--network", type=str, default="ssd-mobilenet-v1", help="pre-trained model to load (see below for options)")

net = jetson.inference.detectNet(args.network)

I get lower FPS (about 10 FPS) than when I use the following line:

net = jetson.inference.detectNet(argv=['--model=ssd-mobilenet-v1.onnx', '--labels=labels.txt', '--input-blob=input_0', '--output-cvg=scores', '--output-bbox=boxes', '--threshold=0.5'])

I should point out that in the last command the network was trained by me personally (but the architecture is the same).
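
For reference, this is roughly how I measure FPS in both cases (just a minimal sketch: I time net.Detect() over the camera stream, and it assumes a jetson-inference build recent enough to have jetson.utils.videoSource; older builds use jetson.utils.gstCamera instead):

import time

import jetson.inference
import jetson.utils

# load the network -- either the built-in one via --network
# or my own ONNX export via the explicit argv list above
net = jetson.inference.detectNet("ssd-mobilenet-v1", threshold=0.5)

# camera / video input (adjust the URI to your setup)
camera = jetson.utils.videoSource("csi://0")

frames = 0
start = time.time()

while frames < 300:
    img = camera.Capture()
    net.Detect(img)
    frames += 1

print("measured {:.1f} FPS over {:d} frames".format(frames / (time.time() - start), frames))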
Can you explain to me why?
Thank you.

Hi,

Based on your statement, the models loaded by the two commands are different.
(Please correct me if this is not true.)

Although the architecture is similar, the operations can end up different depending on how the model was trained and exported.
Would you mind testing this with exactly the same model and sharing the result with us first?
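
For example, something along these lines could feed the same test image to both networks and print the TensorRT profiler times for each (just a sketch: the file and blob names are copied from your command, and it assumes a build recent enough that Detect() accepts a cudaImage directly):

import jetson.inference
import jetson.utils

# built-in pre-trained SSD-MobileNet-v1 shipped with jetson-inference
net_builtin = jetson.inference.detectNet("ssd-mobilenet-v1", threshold=0.5)

# your re-trained model exported to ONNX (paths/blob names taken from your command)
net_custom = jetson.inference.detectNet(argv=[
    "--model=ssd-mobilenet-v1.onnx",
    "--labels=labels.txt",
    "--input-blob=input_0",
    "--output-cvg=scores",
    "--output-bbox=boxes",
    "--threshold=0.5"])

# any test image will do; both networks see exactly the same input
img = jetson.utils.loadImage("test.jpg")

for name, net in [("built-in", net_builtin), ("custom ONNX", net_custom)]:
    # warm up so first-run overhead does not skew the numbers
    for _ in range(50):
        net.Detect(img)
    print(name)
    net.PrintProfilerTimes()

(On a Nano you may want to run the two networks one at a time to avoid memory pressure.)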

Thanks.

Thanks for the reply.

The models have the same architecture, so I assume they are the same.
When I use the parser, it fetches the pre-trained model provided in your repository.
When I use --model, it fetches the model I trained myself (which, at least I hope, has the same architecture as the one you provide).

What’s wrong?

Thank you.

Hi,

Would you mind sharing your model and the corresponding source to reproduce the performance difference with us?
We would like to check it further in our environment first.

Thanks.
