How to balance the CPU load when triggering multiple cameras for video capture

Hi

I have 12 cameras connected to an NVIDIA Jetson Xavier-Industrial, and I share the video streams through a GStreamer RTSP server over 10G Ethernet.
When I trigger video capture, the streams lag badly and the frame rate is poor.
Each stream is 640x480 at 60 fps.
Is it possible to balance the capture load across multiple CPUs?

Could you boost the clocks and check whether that helps?

sudo nvpmodel -m 0
sudo jetson_clocks
sudo su
echo 1 > /sys/kernel/debug/bpmp/debug/clk/vi/mrq_rate_locked
echo 1 > /sys/kernel/debug/bpmp/debug/clk/isp/mrq_rate_locked
echo 1 > /sys/kernel/debug/bpmp/debug/clk/nvcsi/mrq_rate_locked
echo 1 > /sys/kernel/debug/bpmp/debug/clk/emc/mrq_rate_locked
cat /sys/kernel/debug/bpmp/debug/clk/vi/max_rate |tee /sys/kernel/debug/bpmp/debug/clk/vi/rate
cat /sys/kernel/debug/bpmp/debug/clk/isp/max_rate | tee  /sys/kernel/debug/bpmp/debug/clk/isp/rate
cat /sys/kernel/debug/bpmp/debug/clk/nvcsi/max_rate | tee /sys/kernel/debug/bpmp/debug/clk/nvcsi/rate
cat /sys/kernel/debug/bpmp/debug/clk/emc/max_rate | tee /sys/kernel/debug/bpmp/debug/clk/emc/rate
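To confirm the boost took effect, the locked rates can be read back from the same debugfs nodes; a small sketch (run as root, and note it prints "n/a" where a node is unavailable, e.g. on other platforms):

```shell
# Read back the locked clock rates to verify the boost commands above.
# Requires root; prints "n/a" where a debugfs node cannot be read.
for c in vi isp nvcsi emc; do
  rate=$(cat /sys/kernel/debug/bpmp/debug/clk/$c/rate 2>/dev/null || echo n/a)
  echo "$c: $rate"
done
```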

Hi @ShaneCCC

I have already run the commands you provided, but the frame rate is still poor.

Please confirm the frame rate locally instead of over RTSP, to check whether the network is causing the problem.

Thanks

Hi @ShaneCCC

Can you suggest how to check the frame rate without a local camera preview?
My board has no HDMI output, only network streaming.

Please refer to the command below.

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1920,height=1080,format=NV12' ! nvvidconv ! fpsdisplaysink video-sink=fakesink --verbose

@ShaneCCC

My camera does not support the NV12 format, so I changed the command to use UYVY.

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1920, height=1080, format=UYVY' ! nvvidconv ! fpsdisplaysink video-sink=fakesink --verbose

But the pipeline fails to launch:

WARNING: erroneous pipeline: could not link nvarguscamerasrc0 to nvvconv0, nvarguscamerasrc0 can't handle caps video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)UYVY

The OS I tried is L4T 35.3.1.

nvarguscamerasrc does not support YUV formats; try v4l2src or nvv4l2camerasrc instead.
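The frame rate can also be checked with no GStreamer preview at all using v4l2-ctl, which reports the measured fps while streaming to memory. A sketch (the device numbering and UYVY format are assumptions; the loop only prints the commands as a dry run, so drop the leading `echo` to actually stream):

```shell
# Dry run: print one v4l2-ctl frame-rate check per camera node.
# Without the leading "echo", v4l2-ctl streams 300 frames to memory
# and prints the achieved fps roughly once per second.
for i in 0 1 2; do
  echo v4l2-ctl -d /dev/video$i \
    --set-fmt-video=width=640,height=480,pixelformat=UYVY \
    --stream-mmap --stream-count=300
done
```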

@ShaneCCC
nvv4l2camerasrc works; below is the relevant information.

Command log

~$ gst-launch-1.0 nvv4l2camerasrc ! 'video/x-raw(memory:NVMM), width=1920, height=1080, format=UYVY' ! nvvidconv ! fpsdisplaysink video-sink=fakesink --verbose
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFakeSink:fakesink0: sync = true
Setting pipeline to PLAYING …
New clock: GstSystemClock
/GstPipeline:pipeline0/GstNvV4l2CameraSrc:nvv4l2camerasrc0.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)UYVY, interlace-mode=(string)progressive, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)UYVY, interlace-mode=(string)progressive, framerate=(fraction)30/1
/GstPipeline:pipeline0/Gstnvvconv:nvvconv0.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)UYVY, interlace-mode=(string)progressive, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0.GstGhostPad:sink.GstProxyPad:proxypad0: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)UYVY, interlace-mode=(string)progressive, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstTextOverlay:fps-display-text-overlay.GstPad:src: caps = video/x-raw(memory:NVMM, meta:GstVideoOverlayComposition), width=(int)1920, height=(int)1080, format=(string)UYVY, interlace-mode=(string)progressive, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFakeSink:fakesink0.GstPad:sink: caps = video/x-raw(memory:NVMM, meta:GstVideoOverlayComposition), width=(int)1920, height=(int)1080, format=(string)UYVY, interlace-mode=(string)progressive, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstTextOverlay:fps-display-text-overlay.GstPad:video_sink: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)UYVY, interlace-mode=(string)progressive, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0.GstGhostPad:sink: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)UYVY, interlace-mode=(string)progressive, framerate=(fraction)30/1
/GstPipeline:pipeline0/Gstnvvconv:nvvconv0.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)UYVY, interlace-mode=(string)progressive, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)UYVY, interlace-mode=(string)progressive, framerate=(fraction)30/1
WARNING: from element /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFakeSink:fakesink0: Pipeline construction is invalid, please add queues.
Additional debug info:
gstbasesink.c(1209): gst_base_sink_query_latency (): /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFakeSink:fakesink0:
Not enough buffering available for the processing deadline of 0:00:00.020000000, add enough queues to buffer 0:00:00.020000000 additional data. Shortening processing latency to 0:00:00.000000000.
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFakeSink:fakesink0: sync = true
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstTextOverlay:fps-display-text-overlay: text = rendered: 7, dropped: 0, current: 1.17, average: 1.17
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 7, dropped: 0, current: 1.17, average: 1.17
Got EOS from element “pipeline0”.
Execution ended after 0:00:06.237502208
Setting pipeline to NULL …
Freeing pipeline …

dmesg

[ 1377.436805] ar0233 2-0043: Index = 0x0005 , format = 0x59565955, width = 1920, height = 1080, frate num = 30
[ 1377.436814] ar0233 2-0043: Skipping Previous mode set …
[ 1377.520897] nvmap_alloc_handle: PID 3173: gst-launch-1.0: WARNING: All NvMap Allocations must have a tag to identify the subsystem allocating memory.Please pass the tag to the API call NvRmMemHanldeAllocAttr() or relevant.
[ 1377.571194] tegra-camrtc-capture-vi tegra-capture-vi: corr_err: discarding frame 0, flags: 0, err_data 131072
[ 1377.604560] tegra-camrtc-capture-vi tegra-capture-vi: corr_err: discarding frame 0, flags: 0, err_data 131072
[ 1377.637887] tegra-camrtc-capture-vi tegra-capture-vi: corr_err: discarding frame 0, flags: 0, err_data 131072
[ 1377.671233] tegra-camrtc-capture-vi tegra-capture-vi: corr_err: discarding frame 0, flags: 0, err_data 131072
[ 1377.704569] tegra-camrtc-capture-vi tegra-capture-vi: corr_err: discarding frame 0, flags: 0, err_data 131072
[ 1377.737903] tegra-camrtc-capture-vi tegra-capture-vi: corr_err: discarding frame 0, flags: 0, err_data 131072
[ 1383.574167] ar0233 2-0043: mcu_cam_stream_off 294 CAM Get CMD Stream off Success !!
[ 1383.574189] (NULL device *): vi_capture_control_message: NULL VI channel received
[ 1383.574393] t194-nvcsi 13e10000.host1x:nvcsi@15a00000: csi5_stream_close: Error in closing stream_id=0, csi_port=0

debug tracing log
debug-tracing-20230801-1618.log (533.3 KB)

There are many CHANSEL_NOMATCH errors; an incorrect virtual-channel ID in the packet header (PH) could cause them.
Do you use GMSL or a serializer/deserializer (SerDes)?

@ShaneCCC

The bus is GMSL2, with SerDes between the camera sensors and the host.
I have three deserializers, each connected to one 4-lane MIPI CSI port, so I configure vc-id 0 to 3 for the four sensors behind each deserializer.
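For comparison, the GMSL reference sensor device trees shipped with L4T (e.g. the imx390 example) carry the virtual channel in each mode node as a `vc_id` string property. A minimal sketch, assuming the ar0233 driver follows the same convention (property naming may differ for a vendor driver):

```
mode0 {
        /* other mode properties unchanged */
        num_lanes = "4";
        tegra_sinterface = "serial_a";
        vc_id = "0";  /* must match the VC the deserializer writes into the CSI packet header */
};
```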

It shouldn't output any other VC IDs while launching only one camera.

@ShaneCCC

Then what do you suggest?

@ShaneCCC
Do I need to modify the device tree?
Here is my device tree.
ar0233.dtsi (36.1 KB)

It is not a device-tree issue; it could be a problem in the GMSL/SerDes driver.

Hi @ShaneCCC

I am looking into the SerDes driver to troubleshoot the CHANSEL_NOMATCH error, but my original question is how to balance the CPU load when multiple cameras capture video at the same time.
Can you give me some ideas?
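For example, would something like pinning each capture pipeline to a core be a reasonable direction? A sketch of what I have in mind (NCORES=8 assumes the Xavier-Industrial's eight Carmel cores, and the commented-out pipeline is hypothetical):

```shell
# Sketch: spread 12 capture pipelines round-robin across the CPU cores.
NCORES=8
for i in $(seq 0 11); do
  core=$(( i % NCORES ))
  echo "camera $i -> CPU $core"
  # Real pipeline would be pinned like this (hypothetical device naming):
  # taskset -c "$core" gst-launch-1.0 nvv4l2camerasrc device=/dev/video"$i" ! ... &
done
```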

Sorry, I don't understand your request.
For me, the first step is to clarify whether the lag is caused by the network or whether it already lags locally.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.