How to create a benchmark model for ssd_mobilenet_v2

In the Jetson Nano benchmark (https://developer.nvidia.com/embedded/jetson-nano-dl-inference-benchmarks), the processing performance of “ssd_mobilenet_v2” is 39 FPS.
When I tried “ssd_mobilenet_v2”, I got 16-18 FPS.
How did you create the benchmark model?

Hi,

May I know how you benchmarked the ssd_mobilenet_v2 performance?
Are you following these instructions?

If not, please give it a try.
Thanks.

I’m trying the instructions.
Is it possible to run the model provided there (sample_unpruned_mobilenet_v2.uff) from Python code or DeepStream?
I would like to confirm the processing performance when running it from Python.
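As a reference point, FPS is usually measured by timing repeated inference calls after a warm-up phase and averaging. Below is a minimal, generic sketch of such a harness; the `infer` callable is a placeholder standing in for whatever actually runs the model (e.g. a TensorRT execution call), which depends on your setup and is not shown here:

```python
import time

def measure_fps(infer, num_warmup=10, num_iters=100):
    """Time repeated calls to `infer` and return the average FPS."""
    # Warm-up: exclude one-time costs (engine setup, memory allocation, caches).
    for _ in range(num_warmup):
        infer()
    start = time.perf_counter()
    for _ in range(num_iters):
        infer()
    elapsed = time.perf_counter() - start
    return num_iters / elapsed

# Placeholder standing in for real model inference (hypothetical, ~1 ms each).
def dummy_infer():
    time.sleep(0.001)

print(f"{measure_fps(dummy_infer):.1f} FPS")
```

Note that measuring only the inference call, as above, gives a higher number than an end-to-end pipeline that also decodes video and does pre/post-processing.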

Hi,

It’s recommended to use C++ for benchmarking.

But we do have some Python-based DeepStream samples here:

Thanks.

Thank you for your answer.
I will try the Python-based DeepStream samples.

In the end, is it difficult to reach 39 FPS (or at least 30 FPS) with a Python-based pipeline?
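One way to sanity-check whether Python itself is the bottleneck is to measure the cost of the interpreter work done per frame and compare it against the per-frame time budget. The sketch below (an illustrative assumption, not a DeepStream measurement) estimates bare Python call overhead against the ~25.6 ms budget of a 39 FPS target; in a DeepStream Python pipeline most of the heavy work runs in native GStreamer/TensorRT code, so the gap is more often in the pipeline configuration than in Python call overhead:

```python
import time

def python_call_overhead(n=100_000):
    """Estimate the average cost of a bare Python function call, in seconds."""
    def noop():
        pass
    start = time.perf_counter()
    for _ in range(n):
        noop()
    return (time.perf_counter() - start) / n

overhead = python_call_overhead()
budget = 1.0 / 39  # ~25.6 ms per frame at 39 FPS
print(f"per-call overhead: {overhead * 1e6:.2f} us "
      f"({overhead / budget * 100:.4f}% of a 39 FPS frame budget)")
```

If per-frame Python work is a negligible fraction of the budget, the difference between 16-18 FPS and 39 FPS is more likely explained by the model variant, input resolution, precision mode (FP16/INT8), or power mode than by the choice of Python.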