Some questions about jetson.inference

Hello. I am using SSD-Mobilenet-v2 from jetson.inference and want to increase the FPS. According to the benchmarks, SSD-Mobilenet-v2 at 300x300 should reach 39 FPS, but I can't get there. Even when I change the input resolution, the FPS stays around 25.

1) What is the problem?
2) Can DeepStream be combined with jetson.inference? Any advice is welcome.
3) When I run the code, I see messages like "jetson.utils -- cudaFromNumpy() ndarray dim 0 = 224". I don't want to see this; where is the print code for it?


Could you share where the 39 FPS figure comes from?
Some benchmarks target the end-to-end pipeline, while others profile inference time only.
Just want to check that first.

1. Please make sure to maximize the device performance first:

$ sudo nvpmodel -m 0
$ sudo jetson_clocks
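Once the clocks are locked, it is worth measuring the end-to-end frame rate yourself rather than relying on the model's reported inference time. A minimal rolling-FPS counter in plain Python (no jetson imports needed; the class name and window size are just illustrative), called once per loop iteration:

```python
import collections
import time

class FPSCounter:
    """Rolling-average FPS over the last `window` frames."""

    def __init__(self, window=30):
        self.timestamps = collections.deque(maxlen=window)

    def tick(self, now=None):
        """Record one completed frame; `now` can be passed explicitly for testing."""
        self.timestamps.append(time.monotonic() if now is None else now)

    def fps(self):
        """Frames per second over the recorded window (0.0 if too few frames)."""
        if len(self.timestamps) < 2:
            return 0.0
        elapsed = self.timestamps[-1] - self.timestamps[0]
        return (len(self.timestamps) - 1) / elapsed if elapsed > 0 else 0.0
```

Calling `tick()` after each processed frame and printing `fps()` periodically gives the end-to-end number, which includes capture, pre/post-processing, and display overhead, not just inference.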

2. No. They are both end-to-end pipelines, so choosing one of them should be enough.

3. Here is the print code:


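That message is emitted from the C++ side of the cudaFromNumpy() bindings, so reassigning Python's `sys.stdout` will not hide it. As a workaround, here is a minimal sketch of an OS-level file-descriptor redirect (assuming the line goes to stdout; use `fd=2` if it turns out to go to stderr):

```python
import contextlib
import os

@contextlib.contextmanager
def redirect_fd(fd=1, target=os.devnull):
    """Temporarily point a raw file descriptor (default: stdout) at `target`.

    This works for output written at the C level, which bypasses sys.stdout.
    """
    saved = os.dup(fd)                      # keep a copy of the original fd
    replacement = os.open(target, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.dup2(replacement, fd)            # fd now writes to `target`
        yield
    finally:
        os.dup2(saved, fd)                  # restore the original fd
        os.close(saved)
        os.close(replacement)
```

Usage would look like `with redirect_fd(): img = jetson.utils.cudaFromNumpy(arr)`, at the cost of also silencing anything else printed inside the block.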
I know the default mode is 10 W, but I have already tried sudo nvpmodel -m 0.

Here are the benchmarks:

Thanks for your interest.

Hi @iriaslan, the benchmarks use an SSD-Mobilenet-v2 model that was trained on the 37-class Oxford-IIIT Pet Dataset, whereas the model in jetson-inference uses the 90-class MS COCO. By reducing the number of object classes in your model, you get higher performance.

I am developing an autonomous RC car and am using a 224x224 image as input. If I train an SSD-Mobilenet-v2 model with 224x224 images and only 10 classes, will it increase the FPS?

One more question: when I import jetson.utils, it imports everything in utils. I don't want to use all of it, just cudaFromNumpy. What should I write to import only cudaFromNumpy? Will it work without the other parts? Thanks.

Yes, doing that should increase the FPS as well.

Since jetson.utils is a C extension module for Python, it isn't importing Python files, so I'm not sure how to import only the cudaFromNumpy() function; sorry about that.