Choosing the correct architecture for hundreds of cameras

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)

GPU

• DeepStream Version

6.0

I have an application that will have hundreds of RTSP cameras per zone. Currently we have a test site with 100 cameras in a single zone. Some sites will have multiple zones, say six zones, and each zone could have 150 cameras.

My application is similar to tracking people with action recognition. Think of something like a mall, where individual people are tracked as they walk around and what they are doing is identified: walking, eating, etc.

I am choosing hardware and I want to make sure I have chosen the correct architecture for this.

My current approach is to use DeepStream with an object-detector PGIE whose output is passed to an action-recognition SGIE. Results will be published over Kafka so that a third-party tracker can accumulate the location information from each camera and track movement across cameras.
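For reference, the PGIE → SGIE → Kafka chain can be expressed in a deepstream-app configuration. This is only a sketch: the `config-file` names, broker address, and topic below are placeholders, not files from my setup.

```ini
# Primary detector (PGIE)
[primary-gie]
enable=1
gpu-id=0
config-file=config_infer_primary_detector.txt

# Action-recognition classifier (SGIE) operating on PGIE detections
[secondary-gie0]
enable=1
gpu-id=0
operate-on-gie-id=1
config-file=config_infer_secondary_action.txt

# Message-broker sink (type=6) publishing metadata to Kafka
[sink1]
enable=1
type=6
msg-conv-config=msgconv_config.txt
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
msg-broker-conn-str=localhost;9092
topic=detections
```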

Inference must be done on site due to bandwidth limitations. I have chosen the A5000 GPU since it seems to give the best performance for the cost. I will need multiple A5000s since one is not enough, perhaps 10 GPUs depending on frame rate.
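My back-of-envelope sizing looks like this. The streams-per-GPU and per-stream frame-rate figures are assumptions I still need to benchmark, not measured A5000 numbers:

```python
import math

# Assumptions (placeholders, to be replaced with benchmarked values):
CAMERAS_PER_ZONE = 150   # worst-case zone from the post above
STREAMS_PER_GPU = 16     # assumed streams one A5000 sustains end-to-end
                         # (NVDEC decode + PGIE + SGIE) at the target FPS

# GPUs needed to cover one zone with these assumptions
gpus_per_zone = math.ceil(CAMERAS_PER_ZONE / STREAMS_PER_GPU)
print(f"GPUs per zone: {gpus_per_zone}")
```

With these placeholder numbers the estimate lands at 10 GPUs per 150-camera zone, which is where my "perhaps 10 GPUs" figure comes from; the real number depends on the measured decode and inference throughput.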

I’m not sure whether to maximise the number of GPUs in a single server PC or to have many smaller servers with fewer GPUs each. I think with the DeepStream/Kafka approach this doesn’t matter, since the third-party tracker will consolidate the results anyway.
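To make the consolidation step concrete, here is a minimal sketch of what the third-party tracker would do with the Kafka messages, using in-memory dicts in place of a real consumer. The field names (`camera_id`, `object_id`, `ts`, `bbox`) are illustrative, loosely modelled on the DeepStream message-converter JSON, not an exact schema:

```python
from collections import defaultdict

# Simulated per-frame detection messages, as they might arrive
# over Kafka from several DeepStream instances.
messages = [
    {"camera_id": "cam-01", "object_id": 7, "ts": 0.0, "bbox": [100, 80, 40, 120]},
    {"camera_id": "cam-01", "object_id": 7, "ts": 0.5, "bbox": [110, 82, 40, 120]},
    {"camera_id": "cam-02", "object_id": 3, "ts": 0.6, "bbox": [400, 90, 38, 118]},
]

# Accumulate per-camera tracklets keyed by (camera, object); a real
# multi-camera tracker would then associate these tracklets across
# cameras, regardless of which server/GPU produced each message.
tracklets = defaultdict(list)
for m in messages:
    tracklets[(m["camera_id"], m["object_id"])].append((m["ts"], m["bbox"]))

for key, observations in sorted(tracklets.items()):
    print(key, len(observations))
```

The point is that the consumer only sees topic messages, so the server/GPU topology upstream should indeed be transparent to it.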

Questions:

  1. Is going for A5000s here sensible? They seem to give good performance per £. There are so many cards that it’s hard to know how to narrow the field, never mind pick one.

  2. Is running multiple DeepStream instances per zone sensible, or would one big PC be better? One limiting factor is H.264 decode, but the A5000 has two NVDEC engines. I think a single DeepStream instance is tied to one GPU anyway, since the benefit comes from decode and inference never leaving the GPU.

  3. Should I be using DeepStream with Triton Inference Server to “pool” GPU resources somehow, instead of one DeepStream process per GPU?

  4. Should I be looking at something more enterprise-grade with virtual GPUs (vGPU)?

Any input appreciated.

It depends on your requirements and resources; that is your own design decision.

Are there any specific technical questions?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.