Video recorder for 7 IP cameras (1080p)

Hello.
I want to turn an NVIDIA Jetson Nano into a video recorder for 7 IP cameras at 1080p.

Can you please tell me whether the NVIDIA Jetson Nano can do the following:

  1. Record 7 H.264/H.265 1080p@15fps streams without decoding (stream copy).
  2. Analyze 7 H.264/H.265 1080p@15fps streams to detect objects (person, cat, dog, car) and, when an object is detected, start recording (like motion detection), encoding the video at 720p@15fps.

Or a variation using MJPEG for motion detection:

  1. Record 7 H.264/H.265 1080p@15fps streams without decoding.
  2. Analyze 7 MJPEG 1080p@5fps streams to detect objects (person, cat, dog, car) and, when an object is detected, start recording (like motion detection), encoding the video at 720p@5fps in H.264/H.265.

Will the NVIDIA Jetson Nano have enough power?

Specification:
Video decoding:
8 streams 1080p@30fps
18 streams 720p@30fps (H.264/H.265)
Video encoding:
4 streams 1080p@30fps
9 streams 720p@30fps (H.264/H.265)
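
A synthetic pipeline can be used to sanity-check those encoder figures on the device (a sketch; nvvidconv and nvv4l2h264enc are the standard Jetson GStreamer elements, and fpsdisplaysink reports the achieved rate):

$ # Feed 1080p test frames into the hardware H.264 encoder as fast as it will go
$ gst-launch-1.0 videotestsrc num-buffers=600 ! video/x-raw,width=1920,height=1080,format=I420 ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvv4l2h264enc ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 -v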

Hi,
You may try the DeepStream SDK. By default there is a sample config at

/opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app/source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano

It decodes 8 H.264 sources. The model is ResNet10. The parameter interval=4 makes inference run only periodically on frames instead of on every frame.

Generally, the model's load determines the performance. With multiple sources, it is challenging for the Jetson Nano to process every frame, so you would need to set interval according to the model.
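
For example, the sample can be launched directly with deepstream-app (a sketch assuming the default DeepStream 5.1 install location and the sample's usual .txt extension):

$ cd /opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app
$ # Run the bundled 8-source Nano sample; interval is set in its [primary-gie] section
$ deepstream-app -c source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt

Raising interval (e.g. interval=4) skips inference on the intermediate frames and lets the tracker carry detections across them.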

Thanks.
Will I be able to get 5 fps on each stream when capturing MJPEG 1080p@5fps from 7 IP cameras?

Hi,
Running a single MJPEG decode, the fps looks good:

$ gst-launch-1.0 videotestsrc num-buffers=300 ! video/x-raw,width=1920,height=1080,format=I420 ! jpegenc ! filesink location=1080p.mjpeg
$ gst-launch-1.0 filesrc location=1080p.mjpeg ! jpegparse ! nvv4l2decoder mjpeg=1 ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 -v
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 171, dropped: 0, current: 341.60, average: 341.60

Seven MJPEG 1080p@5fps streams should be fine for the decoder.
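
To test against a live camera instead of a local file, the same decoder can take MJPEG over RTSP (a sketch; the rtsp:// URL is a placeholder for your camera's actual stream address):

$ gst-launch-1.0 rtspsrc location=rtsp://CAMERA_IP/stream ! rtpjpegdepay ! jpegparse ! nvv4l2decoder mjpeg=1 ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 -v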

I run six video streams across two locations, recorded by ffmpeg running on Raspberry Pis (four ffmpeg instances on one machine, two on the other).
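
Each recorder can be a plain ffmpeg stream copy along these lines, so nothing is decoded or re-encoded (a sketch, not my exact command; the RTSP URL and output pattern are placeholders, and the segment options split the archive into 10-minute files):

$ # Record one camera with stream copy, segmented into timestamped 10-minute files
$ ffmpeg -rtsp_transport tcp -i rtsp://CAMERA_IP/stream -c copy -f segment -segment_time 600 -reset_timestamps 1 -strftime 1 "cam1_%Y%m%d_%H%M%S.mp4"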

I use the cameras' event detection to send snapshot images to a Jetson Nano running YOLOv3. YOLO scans each 1920x1280 image in ~1 second. Probabilities above a threshold generate alerts, including the static image and a link into the video archive.

So, with this configuration a single Jetson Nano can easily keep up with six or more cameras, and the modular architecture makes it easy to add or remove video or image servers as needed.

Cheers
Sean
  1. What program do you use to detect motion?
  2. Are you sending exactly one picture from the Raspberry Pi to the Jetson Nano? And what is the point of sending one picture, since a person/dog/cat may appear a little later, for example after 1 second? Say I open the door: the camera captures the "door opening" motion, and only a second later do I appear as a person.
  3. If images are sent from all 6 cameras at the same time, the Jetson Nano will take 6 seconds to process them, right?

The cameras have motion detection within their firmware. Mine are all Dahua, but I'm pretty sure that Hikvision and others do the same.
The snapshots go directly from the cameras to the Nano, not via the Raspberry Pis. And yes, I do send more than one snapshot, for the reasons you describe. There is a minimum delay between snapshots, set in the cameras; I use 5 seconds in most cases.
I have a maximum of two cameras covering the same physical space, so it's unlikely that I will have more than one snapshot per second (a zombie invasion, approaching the house on all sides, would overload the Nano… but by that time notifications would be too little, too late).
If the ~1 second processing time were too long, it could easily be reduced by using a different YOLO model, by changing YOLO's parameters, or by reducing the snapshot size. But my experience is that YOLO generates MUCH more accurate results with the large image and full model, so I'm happy with the ~1 sec processing time.
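
For reference, scoring one snapshot with stock Darknet YOLOv3 looks roughly like this (a sketch assuming a standard darknet build; snapshot.jpg and the 0.5 threshold are placeholders):

$ # Run the full YOLOv3 model on a single image, printing detections above the threshold
$ ./darknet detect cfg/yolov3.cfg yolov3.weights snapshot.jpg -thresh 0.5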

Thanks.
And after the camera has sent the image to the Nano and the Nano has found a person, what happens next? I want to understand the logic of your setup.

Intelligent Video Analytics for 8 channels at 1080P 30FPS powered by DeepStream SDK - DeepStream on Jetson Nano - YouTube
