Decreased Framerate - Jetson Xavier Board - 4K video capture with (6) cameras

Hi all,

I am currently working with the Jetson AGX Xavier board and Leopard Imaging's LI-XAVIER-KIT-IMX477M12-H, a (6)-camera 4K capture system. (https://www.leopardimaging.com/product/nvidia-jetson-cameras/nvidia-agx-xavier-camera-kits/li-xavier-kit-imx477/li-xavier-kit-imx477m12-h/)

Currently, I am maxing out at only (3) 4K (3840x2160) @ 30fps streams on the board. I can save these streams as .h264 using the GStreamer command below.

sudo jetson_clocks
sudo nvpmodel -m 0

gst-launch-1.0 nvarguscamerasrc sensor_id=0 maxperf=true ! 'video/x-raw(memory:NVMM), width=3840, height=2160, format=NV12, framerate=30/1' ! omxh264enc control-rate=2 bitrate=2000000 preset-level=0 profile=1 ! qtmux ! filesink location=test1.h264 nvarguscamerasrc sensor_id=1 maxperf=true ! 'video/x-raw(memory:NVMM), width=3840, height=2160, format=NV12, framerate=30/1' ! omxh264enc control-rate=2 bitrate=2000000 preset-level=0 profile=1 ! qtmux ! filesink location=test2.h264 nvarguscamerasrc sensor_id=2 maxperf=true ! 'video/x-raw(memory:NVMM), width=3840, height=2160, format=NV12, framerate=30/1' ! omxh264enc control-rate=2 bitrate=2000000 preset-level=0 profile=1 ! qtmux ! filesink location=test3.h264 -e

The issue I am encountering is that when I scale up to (6) cameras capturing at 3840x2160 @ 30fps, the frame rate drops drastically to roughly 8-10 fps in the saved videos. I have done some troubleshooting and cannot find the bottleneck causing the decreased performance.

If you could suggest a solution, or some troubleshooting techniques for this issue of not being able to capture (6) cameras at 4K resolution @ 30 fps, that would be great.

Thank you,
Jon

The OMX plugins are being deprecated; you may use nvv4l2h264enc instead. Note that some options have the same name but different values.

I don't have this hardware, so I used videotestsrc and nvvidconv to simulate the output of nvarguscamerasrc.
The basic pipeline is:

gst-launch-1.0 -e videotestsrc ! video/x-raw, width=3840, height=2160, framerate=30/1 ! nvvidconv ! 'video/x-raw(memory:NVMM), width=3840, height=2160, format=NV12, framerate=30/1' ! nvv4l2h264enc maxperf-enable=1 control-rate=1 bitrate=2000000 preset-level=1 profile=0 ! h264parse ! qtmux ! filesink location=testh264_1.mov

6 pipelines run fine. [EDIT: I spoke too fast; I only get 15-16 fps when measuring.]
(I'm using a Xavier NX with 6 cores at 15 W and slow clocks.)

Hi there, thanks for the timely response.

Sure, I ran the suggested command and the 6 pipelines do run fine at 3840x2160 @ 30 FPS, verified by inspecting the files with ffprobe.

When I switch the command to use nvarguscamerasrc instead of videotestsrc, I see the frame rate decrease in the videos. I also switched to nvv4l2h264enc, with no noticeable change in the frame rates of the produced videos.

Any thoughts on why this might be?

In my case the low fps above was due to videotestsrc.
Connecting to a headless AGX Xavier, I can use a single camera (OV5693) and use tee to duplicate the stream.

gst-launch-1.0 -ev nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), width=3840, height=2160, format=NV12' ! tee name=video ! queue ! fpsdisplaysink video-sink=fakesink text-overlay=false   video. ! queue ! nvv4l2h264enc maxperf-enable=1 control-rate=1 bitrate=2000000 preset-level=1 profile=0 ! h264parse ! qtmux ! filesink location=test1.mov      video. ! queue ! nvv4l2h264enc maxperf-enable=1 control-rate=1 bitrate=2000000 preset-level=1 profile=0 ! h264parse ! qtmux ! filesink location=test2.mov    video. ! queue ! nvv4l2h264enc maxperf-enable=1 control-rate=1 bitrate=2000000 preset-level=1 profile=0 ! h264parse ! qtmux ! filesink location=test3.mov    video. ! queue ! nvv4l2h264enc maxperf-enable=1 control-rate=1 bitrate=2000000 preset-level=1 profile=0 ! h264parse ! qtmux ! filesink location=test4.mov
...
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFakeSink:fakesink0: sync = true
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 17, dropped: 0, current: 33,42, average: 33,42
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 32, dropped: 0, current: 29,86, average: 31,65
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 48, dropped: 0, current: 30,14, average: 31,13
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 64, dropped: 0, current: 30,03, average: 30,85

Measuring with fpsdisplaysink, I get 30 fps with up to 4 pipelines; it looks unstable with more, but it may still be better than omxh264enc (19-20 fps with 6, confirmed by ffprobe on the files).

Probably some difference in HW/DT/driver/ISP explains it, but I'm unable to tell more.

One thing I cannot test, because I have only one CSI camera, is using 6 instances of gst-launch instead of 1 gst-launch running all 6 pipelines. You may try whether this makes a difference.
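A dry-run sketch of that idea: one gst-launch-1.0 process per sensor, reusing the nvv4l2h264enc settings from earlier in this thread. The sensor ids 0-5 and the output filenames are assumptions; the leading echo makes it print the commands instead of running them, so remove it (and background each process with & plus a final wait) on the actual Xavier.

```shell
#!/bin/sh
# Dry run: print one gst-launch-1.0 command per sensor (ids 0-5 assumed).
# Remove the leading "echo" on the device to actually start the captures,
# appending "&" to each and a final "wait" to run them in parallel.
for i in 0 1 2 3 4 5; do
  echo gst-launch-1.0 -e nvarguscamerasrc sensor_id=$i maxperf=true ! \
    "'video/x-raw(memory:NVMM), width=3840, height=2160, format=NV12, framerate=30/1'" ! \
    nvv4l2h264enc maxperf-enable=1 control-rate=1 bitrate=2000000 preset-level=1 profile=0 ! \
    h264parse ! qtmux ! filesink location=test$i.mov
done
```

Running separate processes changes how Argus and the encoder threads are scheduled, which is why it may (or may not) behave differently from one process with 6 pipelines.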

Please check whether the file write is the bottleneck by using fakesink.

@ShaneCCC,

No, the bottleneck is not the filesink; I got the same results with fakesinks after the encoders.

I upgraded my AGX with apt; it was less than 2 months out of date.

Surprisingly, remote connections with ssh -X or -Y now no longer allow using nvarguscamerasrc:

gst-launch-1.0 -ev nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1' ! fakesink
nvbuf_utils: Could not get EGL display connection
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc0.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstFakeSink:fakesink0.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1
GST_ARGUS: Creating output stream
(Argus) Error NotSupported: Failed to initialize EGLDisplay (in src/eglutils/EGLUtils.cpp, function getDefaultDisplay(), line 77)
(Argus) Error BadParameter:  (propagating from src/eglstream/FrameConsumerImpl.cpp, function initialize(), line 89)
(Argus) Error BadParameter:  (propagating from src/eglstream/FrameConsumerImpl.cpp, function create(), line 44)
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, threadInitialize:247 Failed to create FrameConsumer
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, threadFunction:188 (propagating)
^Chandling interrupt.
Interrupt: Stopping pipeline ...
EOS on shutdown enabled -- Forcing EOS on the pipeline

Is this expected? How can we use X11 forwarding with nvarguscamerasrc?

@jheager2,
On the other hand, logging in with ssh without X11 forwarding, it now works @ 30 fps, tested with up to 8 pipelines:

gst-launch-1.0 -ev nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), width=3840, height=2160, format=NV12' ! tee name=video ! queue ! fpsdisplaysink video-sink=fakesink text-overlay=false   video. ! queue ! nvv4l2h264enc maxperf-enable=1 control-rate=1 bitrate=2000000 preset-level=1 profile=0 ! h264parse ! qtmux ! filesink location=test1.mov      video. ! queue ! nvv4l2h264enc maxperf-enable=1 control-rate=1 bitrate=2000000 preset-level=1 profile=0 ! h264parse ! qtmux ! filesink location=test2.mov    video. ! queue ! nvv4l2h264enc maxperf-enable=1 control-rate=1 bitrate=2000000 preset-level=1 profile=0 ! h264parse ! qtmux ! filesink location=test3.mov    video. ! queue ! nvv4l2h264enc maxperf-enable=1 control-rate=1 bitrate=2000000 preset-level=1 profile=0 ! h264parse ! qtmux ! filesink location=test4.mov  video. ! queue ! nvv4l2h264enc maxperf-enable=1 control-rate=1 bitrate=2000000 preset-level=1 profile=0 ! h264parse ! qtmux ! filesink location=test5.mov   video. ! queue ! nvv4l2h264enc maxperf-enable=1 control-rate=1 bitrate=2000000 preset-level=1 profile=0 ! h264parse ! qtmux ! filesink location=test6.mov    video. ! queue ! nvv4l2h264enc maxperf-enable=1 control-rate=1 bitrate=2000000 preset-level=1 profile=0 ! h264parse ! qtmux ! filesink location=test7.mov  video. ! queue ! nvv4l2h264enc maxperf-enable=1 control-rate=1 bitrate=2000000 preset-level=1 profile=0 ! h264parse ! qtmux ! filesink location=test8.mov
...
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFakeSink:fakesink0: sync = true
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 17, dropped: 0, current: 32,17, average: 32,17
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 32, dropped: 0, current: 29,86, average: 31,04
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 47, dropped: 0, current: 29,70, average: 30,60
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 63, dropped: 0, current: 30,33, average: 30,53
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 78, dropped: 0, current: 29,97, average: 30,42
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 94, dropped: 0, current: 30,03, average: 30,35
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 109, dropped: 0, current: 29,97, average: 30,30
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 123, dropped: 0, current: 27,76, average: 29,99
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 140, dropped: 0, current: 32,18, average: 30,24
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 155, dropped: 0, current: 29,94, average: 30,21
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 171, dropped: 0, current: 30,04, average: 30,19
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 187, dropped: 0, current: 30,00, average: 30,18

So if it doesn't work for you with an up-to-date AGX, the issue may be in the cameras or the video input.

Additional note: on an up-to-date Xavier NX, the same limit of 4 pipelines at 30 fps remains; the framerate decreases with more.

Run export DISPLAY=:0 (or :1) in the ssh session so the camera can launch.

Thanks @Honey_Patouceul for all the time you have put into troubleshooting this issue. I followed your September 4th suggestion, ran the same command, and successfully had 8 pipelines running @ 30 fps. It is only when I add the actual cameras to the pipeline that the frame rate drops, so I believe the limitation may lie with the cameras/camera adapter board.

Going to continue troubleshooting to find the root cause, @Adrian-LeopardImaging.

Hi jheager2,

In your case, file write speed might be the issue. If you are able to stream 6 cameras at 30 fps, then the hardware should be fine.

Best,
Adrian

Hi jheager2,

This is Kevin from Leopard Imaging Inc. I think we talked about this recently?

According to your description:

  1. When you stream video from all 6 cameras at the same time through Argus, you get an actual 28 fps from the cameras.
  2. The frame rate drops to around 10 fps only when you stream video and also save it.

Do I remember correctly?

I don't think this issue is caused by our camera, for three reasons:
A. You can stream video from all 6 cameras at around 30 fps. In our camera's working flow there is no difference between just streaming video and saving it, so the difference shouldn't come from our camera's processing.
B. As I mentioned, I captured one frame and found its data size to be around 7 MB. So we can estimate the data rate when all cameras are saving video at 30 fps: 6 x 30 x 7 MB = 1260 MB/s in total, or 210 MB/s per camera. That is a lot of data to write into the Xavier's storage; you need to make sure the Xavier's storage (or external storage, if you use it) can handle more than 1000 MB every second. This is a possible bottleneck in your case.
C. Since the Xavier needs to encode before saving to storage, we also need to make sure it can encode all of this data in time. This is another possible bottleneck in your case.
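To check point B, a rough sequential-write test with dd shows what the recording target actually sustains. The path and size below are examples, not from the thread; point TESTFILE at the storage the video files go to.

```shell
#!/bin/sh
# Rough sequential-write throughput check for the recording target.
# /tmp is only an example path; set TESTFILE to the storage actually
# used for the video files (eMMC, SD card, NVMe, ...).
TESTFILE=/tmp/writetest.bin
# dd reports the achieved MB/s on its last status line.
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fsync 2>&1 | tail -n 1
rm -f "$TESTFILE"
```

If the reported rate is far below the ~210 MB/s-per-camera raw estimate above, storage would be a plausible bottleneck for raw capture; note, though, that the encoded streams in this thread use bitrate=2000000 (2 Mbit/s) each, which is a far smaller write load.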

Thank you.
Kevin

@Adrian-LeopardImaging, @KevinGong,

Sorry if my comments in this topic got confused with @jheager2's.

In post #6, I showed that a standard AGX Xavier dev kit using its eMMC, running R32.4.3, with an OV5693 camera module from a TX1/TX2 devkit, could encode and store 8 (replicated) streams. You may try that on your end.

So my guess is that something different in your HW or SDK causes this. I don't have your HW, so you can probably advise @jheager2 with more accurate information than I can provide.

I do appreciate your products and support here, though.

Hi all,

After working on this, and with the customer's confirmation, the issue is now solved.

The issue is not related to the driver or any other function of the Xavier kit. We finally noticed that the customer was recording video at a resolution not listed in our driver, so the Xavier platform had to crop each frame before display and recording, which was too much extra processing for the Xavier to handle. After the customer switched to our default resolution, as defined in the firmware, the issue was solved.

This issue should be closed now. Thank you all.