Jetson Nano and 4-camera object recognition

Hello all,

I'm new to AI, the Nano, etc., so please excuse me in advance.

I would like to build an embedded system with the Jetson Nano to observe and detect objects within a space, such as a cube.

I was thinking of putting a camera on each (vertical) side of the cube, capturing video, and running inference to eventually detect different objects. Is this conceptually the right way to go?

There are different types of cameras and connections (e.g. USB and MIPI). It looks like the Nano can use two MIPI cameras, or more with a carrier board; however, I was thinking of starting with USB. Is this OK, or should I aim for MIPI? What is the price/performance ratio?

What IDE should I use to put everything together? I was thinking of starting with Python; however, what would you suggest? Eventually I would also like to use the GPIO, deep-learning libraries (e.g. TensorFlow), wireless, etc.

Thank you in advance for your suggestions!

Best.

Hi,

The approach sounds good.

The biggest problem is that you will get different observations across positions and rotations, so the detector needs to know every possible appearance of each object.

For the camera, please check our support list first.
https://elinux.org/Jetson_Nano#Cameras

Not all CSI/USB cameras are supported by the Jetson.
It’s recommended to choose one from our camera partners.

For the library, it’s recommended to use our DeepStream SDK.
It optimizes the whole camera-and-inference pipeline.
It also includes several multi-input use cases, which are a good starting point for your requirement.
https://docs.nvidia.com/metropolis/deepstream/4.0/dev-guide/index.html#page/DeepStream_Development_Guide%2Fdeepstream_app_architecture.html
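For reference, a multi-camera setup in the deepstream-app reference application is driven by a configuration file with one [source] group per camera. The fragment below is only a rough sketch of what a four-camera V4L2 config might look like; the group and key names follow the DeepStream 4.0 reference-app format as I recall it, so treat the values (resolutions, device nodes) as assumptions and check the linked guide:

```ini
# Sketch: two of four camera sources; repeat for [source2]/[source3]
[source0]
enable=1
type=1                   # 1 = CameraV4L2 (USB); 5 = CSI camera on Jetson
camera-width=1280
camera-height=720
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0   # /dev/video0

[source1]
enable=1
type=1
camera-width=1280
camera-height=720
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=1   # /dev/video1

[streammux]
batch-size=4             # batch all four camera streams into one inference call
```

The stream muxer batches the frames from all enabled sources, so a single detector instance serves every camera.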

Thanks.

Thank you for the reply, it helps a lot!

Regarding the “biggest problem”: maybe I should say more about why I am looking for 4 cameras. The idea is to identify different objects (circa 1,000) within a given cube, though not necessarily all at the same time. For example, I am worried that a bigger object would block the view of an object behind it, hence I want to see the space from the other sides as well. However, each side’s view can also be blocked by a bigger object in front of it; I guess the only solution then is a camera at the top of the cube for a bird’s-eye view. This will be addressed later if necessary, and I guess the Nano can handle even more cameras. By the way, what is the typical solution for such cases?

With regard to inference, I was thinking of analysing the input from each camera and looking for differences from previous frames (analysing the whole space that camera can see), running one individual network per camera view, and then continuing with the fusion logic. The logic could, for example, be: if “object X” is found on the right side and also on the front side, while the left and back sides yield a different detection (we cannot see the complete shape of the object, maybe because it is too big or the surface does not match “object X”), then we can still infer that “object X” is present, and so on. Do you think this is the right way to think about it?

Anyway, I would be more than happy to tackle this with other working solutions, or to study examples of how this was tackled before; there is no need to reinvent the wheel.

Looking back, the idea is to identify different objects within a given cube space, hence the four cameras, one on each of the cube’s vertical sides. However, is there a way to map what the cameras capture into a smooth 3D look-around, something like stereo vision? This raises the question of depth cameras (e.g. the ZED Mini) and of mapping the view within the cube. In addition, what combination of networks (CNNs, etc.) would be useful here, not just for object detection but also for stitching the views together before inference?

What do you think…

Best.

Hi,

I want to buy a Jetson Nano for AI, but first I need to find out whether the Jetson Nano is compatible with any MPI (Myocardial Perfusion Imaging) cameras.

Thank you.

Indeed, MIPI is not the same as MPI, but you know that… I guess you are looking for a different sensor from the ones found in the cameras I am looking at. In addition, I would guess you would use an I2C connection…

I would love to read more about connecting several cameras to the Jetson Nano. For example, in the “NVIDIA Jetson Nano Product Design Guide”, p. 32, section 6, we can read more about the MIPI video input; in fact: “Jetson Nano brings twelve MIPI CSI lanes to the connector. Three quad-lane camera streams, or two quad-lane plus two dual-lane camera streams, or one quad-lane plus three dual-lane camera streams are supported.”

So, can we conclude that I cannot connect 5 cameras to the Jetson Nano? Or rather, which combinations of connections are possible, and how can the number of cameras be maximized?
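To answer my own question concretely, here is a small enumeration I sketched (my own illustration, not an NVIDIA tool). It assumes two constraints that I read out of the design guide, namely 12 MIPI CSI lanes in total and at most 4 concurrent camera streams, and lists every mix of quad-lane and dual-lane cameras that fits:

```python
# Hypothetical enumeration of Jetson Nano CSI camera mixes, assuming
# (my reading of the Product Design Guide): 12 MIPI CSI lanes total
# and at most 4 concurrent camera streams.
TOTAL_LANES = 12
MAX_STREAMS = 4

def valid_mixes():
    """Yield (quad_cams, dual_cams) pairs that fit the lane/stream budget."""
    for quads in range(MAX_STREAMS + 1):
        for duals in range(MAX_STREAMS + 1 - quads):
            cams = quads + duals
            lanes = 4 * quads + 2 * duals
            if cams > 0 and lanes <= TOTAL_LANES:
                yield quads, duals

for quads, duals in valid_mixes():
    print(f"{quads} quad-lane + {duals} dual-lane "
          f"-> {quads + duals} cameras, {4 * quads + 2 * duals} lanes")
```

Under these assumptions every valid combination tops out at four cameras (e.g. 3 × quad-lane, 2 × quad + 2 × dual, 1 × quad + 3 × dual, or 4 × dual-lane), so a fifth CSI camera would not fit on the module itself.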

Help is much appreciated, especially on how to reason about the possible connections…

Best.

Please have a look at our Nano camera (e-CAM30_CUNANO) and STEEReoCAM.

The e-CAM30_CUNANO is a 3.4 MP, 2-lane MIPI CSI-2 custom-lens camera board for the NVIDIA® Jetson Nano™ Developer Kit. The camera is based on the 1/3" AR0330 CMOS image sensor from ON Semiconductor® with a 2.2 µm pixel size.

The STEEReoCAM™ is a 2 MP 3D MIPI stereo camera for the NVIDIA® Jetson AGX Xavier™/TX2 Developer Kit with improved accuracy and depth range. This MIPI stereo camera is based on the 1/2.9" OV2311 global-shutter CMOS sensor from OmniVision.

We are planning to launch STEEReoCAM™ for NVIDIA Jetson Nano Developer Kit. We’ll keep you informed.

Thank you for your offer; however, I am still not sure how I can connect, for example, 5 e-CAM30_CUNANO cameras.

The Jetson Nano Developer Kit supports only a single 2-lane camera, so you can connect only one 2-lane camera to it. To connect multiple cameras (3 × 4-lane or 4 × 2-lane), you need a custom carrier board for the Jetson Nano module.

Again, I am still not sure how to connect 5 cams…

Best.