Camera for high-speed inference on the Nano

Hi,

I want to do some indoor (normal to dim room lighting) high-speed inferencing on a Jetson Nano. I am looking for a cost-effective solution, and color isn't really needed. I believe anything around 720p at 120-240 fps might do the job. I am not too concerned whether it is USB or MIPI CSI-2, but cheaper is better :)

I have looked through the camera vendors listed in the FAQ at some length, but I can't tell what a reasonable price would be, or how well their cameras would work with the Nano. What should I expect to pay for a monochrome 1.5-3 MP global-shutter camera at ~200 fps for the Jetson Nano?

Thanks!

Hi chrisnjackson,

I can't say which camera is suitable for your case, since everyone's requirements are different. I would suggest starting with a general-purpose camera to implement your project first and make sure the inference results are satisfactory, then upgrading the camera afterwards.

Thanks

That likely won't be possible in dim lighting. High frame rates require a short exposure (fast shutter), which means less light reaches the sensor, so your images will be very noisy.

You can do 120 fps at 720p with an IMX219-based CSI camera, but you won't be able to run inference at that speed either (at least on a Nano). You could, however, probably run inference on every 4th frame or so and interpolate the results, along the lines of the sketch below.
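Something like this, purely as a sketch (the `run_inference` call and the box format are placeholders, not any particular framework's API):

```python
# Sketch only: run the detector on every 4th frame and linearly interpolate
# box positions for the frames in between. `run_inference` is a placeholder
# for whatever model call you actually use.
import numpy as np

INFER_EVERY = 4

def interpolate_boxes(prev_box, next_box, t):
    """Linear interpolation between two [x, y, w, h] boxes, t in [0, 1]."""
    return (1.0 - t) * np.asarray(prev_box, dtype=float) + \
           t * np.asarray(next_box, dtype=float)

def process_clip(frames, run_inference):
    results = [None] * len(frames)
    keyframes = []                                 # (frame index, box) pairs
    for i in range(0, len(frames), INFER_EVERY):
        box = run_inference(frames[i])             # the expensive call
        keyframes.append((i, box))
        results[i] = box
    # Fill the gaps between consecutive keyframes.
    for (i0, b0), (i1, b1) in zip(keyframes, keyframes[1:]):
        for i in range(i0 + 1, i1):
            t = (i - i0) / (i1 - i0)
            results[i] = interpolate_boxes(b0, b1, t)
    # Frames after the last keyframe just reuse its result.
    if keyframes:
        last_i, last_box = keyframes[-1]
        for i in range(last_i + 1, len(frames)):
            results[i] = last_box
    return results
```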

As far as what cameras work, you could try this. It's compatible with the Jetson Nano and has the IR filter removed, so you can use it with IR floodlights. That might be suitable for your low-light application.

Super, thanks. Yes, part of it is just to get some experience around what is hard and what is impossible or ultra expensive. This seems like a reasonable place to start.

I've run inference at 90 Hz on a Raspberry Pi 3, on the CPU!

The trick is to run a small model and pipeline it – you get a new inference every 11 ms, but the latency is 33 ms. I round-robined each image to a separate core, with three copies of the same model, roughly as in the sketch below. (I was using a port of caffe2 at the time, IIRC.)
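The shape of it looks roughly like this, as a sketch only: `load_model()` is assumed to be a callable that returns something you can call on a frame (the real code was caffe2-specific, and this isn't it):

```python
# Sketch: round-robin frames to NUM_WORKERS processes, each holding its own
# copy of a small model. Throughput is ~NUM_WORKERS inferences per model
# latency, even though each single inference is slower than the frame period.
import multiprocessing as mp

NUM_WORKERS = 3                                   # one model copy per core

def _worker(load_model, in_q, out_q):
    model = load_model()                          # each process loads its own copy
    while True:
        item = in_q.get()
        if item is None:                          # shutdown signal
            break
        idx, frame = item
        out_q.put((idx, model(frame)))

def run_pipelined(frames, load_model):
    in_qs = [mp.Queue(maxsize=2) for _ in range(NUM_WORKERS)]
    out_q = mp.Queue()
    procs = [mp.Process(target=_worker, args=(load_model, q, out_q))
             for q in in_qs]
    for p in procs:
        p.start()

    # Deal frames out round-robin; each worker only sees every 3rd frame.
    for idx, frame in enumerate(frames):
        in_qs[idx % NUM_WORKERS].put((idx, frame))
    for q in in_qs:                               # tell the workers to stop
        q.put(None)

    # Results come back slightly out of order, so re-sort by frame index.
    results = sorted((out_q.get() for _ in range(len(frames))),
                     key=lambda item: item[0])
    for p in procs:
        p.join()
    return results
```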

I imagine you can do something similar, but much better, on the Jetson: run at 90 Hz, use a small model, and run inference at the full frame rate.