PeopleNet inference using DeepStream

I am going to buy a system with a Tesla T4 for pedestrian detection.
I have 80 IP cameras to stream from an NVR.
The requirement is to process one frame from each of the 80 cameras (streaming + inference) within 1 sec.
Is it possible?
On this page it is claimed to reach 1043 fps.
Is it possible to finish all 80 images in 1 sec?
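As a rough sanity check of the arithmetic (a sketch only; the 1043 fps figure is the throughput claimed on that page and ignores decode and streaming overhead):

```python
# Rough feasibility check: can one frame from each of 80 cameras be
# inferred within 1 second?  Numbers come from this thread; real
# pipelines add decode and pre/post-processing overhead on top.
num_cameras = 80
frames_per_camera_per_sec = 1        # requirement: 1 frame per camera per second
claimed_infer_fps = 1043             # claimed PeopleNet throughput on a T4

required_fps = num_cameras * frames_per_camera_per_sec   # 80 fps
headroom = claimed_infer_fps / required_fps

print(f"required: {required_fps} fps, claimed: {claimed_infer_fps} fps, "
      f"headroom: {headroom:.1f}x")
```

On these numbers alone the inference engine has plenty of headroom; the open question is whether decode and streaming keep up.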

AFAIK, an IP camera normally streams at 25 or 30 fps.
Per your requirement - 80 images / sec - do you only need to process 1 fps per camera?

The T4 HW decoder capability can be found on the NVIDIA VIDEO CODEC SDK | NVIDIA Developer page.
You can cross-check its H.264/H.265 capability against your 80 camera input frame rates and formats.

The fps the pipeline can achieve also depends heavily on the inference models you will run on the GPU, IOW, how many fps the inference models can process.
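To sketch that reasoning (all capacity numbers below are hypothetical placeholders; substitute the NVDEC figures from the Video Codec SDK support matrix and your own measured model fps):

```python
# End-to-end throughput is bounded by the slowest stage (decode vs
# inference).  All capacity numbers here are hypothetical placeholders,
# NOT measured T4 figures.
num_cameras = 80
camera_fps = 30                      # typical IP camera frame rate

decode_capacity_fps = 2500           # hypothetical total H.264 1080p decode fps
infer_capacity_fps = 1043            # claimed PeopleNet fps on a T4

# If every incoming frame must be decoded:
required_decode_fps = num_cameras * camera_fps           # 2400 fps

# If only 1 frame per camera per second must be inferred, the rest can be
# skipped after decode (e.g. via nvinfer's frame-skipping "interval" setting):
required_infer_fps = num_cameras * 1                     # 80 fps

pipeline_bound_fps = min(decode_capacity_fps, infer_capacity_fps)
print(f"decode load: {required_decode_fps} fps vs capacity {decode_capacity_fps}")
print(f"inference load: {required_infer_fps} fps vs capacity {infer_capacity_fps}")
print(f"pipeline bound: {pipeline_bound_fps} fps")
```

The point of the sketch is that decode, not inference, may be the tighter constraint when all 80 streams run at full frame rate.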

As long as I can finish processing frames from all 80 cameras within 1 sec, I meet the speed requirement.

I am testing PeopleNet on a Xavier now using offline MP4 videos. Inference on eight streams on the Xavier, with eight images in one batch in DeepStream, is not real-time; it lags a bit. Is it because of the display?

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Performance.html#tlt-pre-trained-models
