Parallel inference on the Jetson Nano

Hi there,

I have a project with about 4 cameras, and each image from each stream needs to undergo inference. Using the pednet model I get about 7 fps on a single camera, and the fps drops by half for each camera I add. The camera streams are captured in parallel using GStreamer with no problem, but how do I run inference in parallel so that I maintain 7 fps on all camera streams?

Kind regards

Hi

You need to optimize your model so that it reaches around 25+ fps on a single camera.
Then you can get about 7 fps per stream with batch=4 inference, since one batched forward pass amortizes the per-inference overhead across all four streams.
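To make the batch=4 idea concrete, here is a minimal sketch of the preprocessing side: take one frame from each camera and stack them into a single batch tensor, so the engine runs one forward pass for all four streams instead of four separate passes. The frame size and NCHW layout here are illustrative assumptions, not pednet's actual input spec:

```python
import numpy as np

# Illustrative frame size; pednet's real input resolution may differ.
H, W = 360, 640

# One frame per camera (placeholders standing in for GStreamer captures).
frames = [np.zeros((H, W, 3), dtype=np.uint8) for _ in range(4)]

# Stack into one NCHW batch so a single forward pass covers all four
# streams, amortizing the per-inference overhead.
batch = np.stack([f.transpose(2, 0, 1) for f in frames]).astype(np.float32)
print(batch.shape)  # (4, 3, 360, 640)
```

The engine (e.g. a TensorRT build of the model) must be built with max batch size >= 4 for this to help; otherwise the batch is still processed one image at a time.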

Hi Juns,

Thank you for the response. Reading through reviews and specs of the Nano, many claim that it can "also run different neural networks in parallel". If so, will I not be able to run inference on two different camera streams in parallel?

Kind regards

Hi

Yes, you can.
You can run inference on two different camera streams in parallel, either with the same DL model or with different models, as long as there is enough memory.
Your case is a normal one: as long as you choose a suitable model, YOLO tiny for example, it will work.
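As a rough sketch of the "two streams, two models" pattern (all names here are hypothetical placeholders, not a real inference API): give each stream its own worker thread with its own model, feed frames through per-stream queues, and collect detections on a shared result queue:

```python
import queue
import threading

def inference_worker(model_name, frames_in, results_out):
    """Consume frames from one camera stream and emit detections.

    run_model is a stub standing in for a real per-thread inference
    call (e.g. a TensorRT or jetson-inference detector).
    """
    def run_model(frame):
        return f"{model_name} detections for {frame}"  # placeholder
    while True:
        frame = frames_in.get()
        if frame is None:  # sentinel: stream ended
            break
        results_out.put(run_model(frame))

results = queue.Queue()
streams = {"pednet": queue.Queue(), "yolo-tiny": queue.Queue()}
workers = [
    threading.Thread(target=inference_worker, args=(name, q, results))
    for name, q in streams.items()
]
for w in workers:
    w.start()

# Feed one fake frame into each stream, then send the shutdown sentinel.
for q in streams.values():
    q.put("frame-0")
    q.put(None)
for w in workers:
    w.join()

print(results.qsize())  # 2: one result per stream
```

With real models the heavy work runs on the GPU, so Python threads are usually adequate here; if CPU-side preprocessing dominates, multiprocessing is an alternative.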

You can also look a little more into the DeepStream SDK NVIDIA provides; it will show you how to achieve this.

Hi,

Thank you. Are there any Python-specific demos that demonstrate parallel inference? I find learning by example to be invaluable.

Hi

You can find the DeepStream Python apps here; I believe they can help.
