Hardware: TX2
L4T: 32.7.4
Image sensor: OS08A10, 4K @ 30 FPS.
Problem:
Single-camera streaming using GStreamer works fine, but streaming 3 cameras simultaneously causes an immediate freeze with this simple command line:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! nvoverlaysink nvarguscamerasrc sensor-id=1 ! fakesink nvarguscamerasrc sensor-id=2 ! fakesink
Here’s the output from nvargus-daemon, which I launched with infinite timeout enabled:
NvViErrorDecode CaptureError: CsimuxFrameError (2)
NvViErrorDecode See https://wiki.nvidia.com/wmpwiki/index.php/Camera_Debugging/CaptureError_debugging for more information and links to documents.
CsimuxFrameError_Regular : 0x000000a4
Stream ID [ 2: 0]: 4
VPR state from fuse block [ 3]: 0
Frame end (FE) [ 5]: 1
A frame end has been found on a regular mode stream.
FS_FAULT [ 7]: 1
A FS packet was found for a virtual channel that was already in frame. An errored FE packet was injected before FS was allowed through.
captureErrorCallback Stream 4.0 capture 4 failed: ts 317715960224 frame 3 error 2 data 0x000000a4
Additional information:
ISP/VI/NVCSI clocks are all boosted to max.
If I use v4l2 streaming, bypassing the ISP, I can stream 3 cameras at the same time without problems:
v4l2-ctl -d /dev/video0 -c bypass_mode=0 --stream-mmap
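For reference, streaming all three sensors at once looks roughly like this (a rough sketch; the sensors are assumed to enumerate as /dev/video0 through /dev/video2, and --stream-count is arbitrary):
v4l2-ctl -d /dev/video0 -c bypass_mode=0 --stream-mmap --stream-count=300 &
v4l2-ctl -d /dev/video1 -c bypass_mode=0 --stream-mmap --stream-count=300 &
v4l2-ctl -d /dev/video2 -c bypass_mode=0 --stream-mmap --stream-count=300 &
wait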
Device tree file:
mode0 { // OS08A10_MODE_3840X2160
status = "okay";
mclk_khz = "24000";
num_lanes = "4";
tegra_sinterface = "serial_a";
phy_mode = "DPHY";
discontinuous_clk = "no";
dpcm_enable = "false";
cil_settletime = "0";
active_w = "3840";
active_h = "2160";
mode_type = "bayer";
pixel_phase = "bggr";
csi_pixel_bit_depth = "12";
readout_orientation = "90";
line_length = "3848";
inherent_gain = "1";
mclk_multiplier = "25";
pix_clk_hz = "756000000";
gain_factor = "10";
min_gain_val = "10";
max_gain_val = "155";
step_gain_val = "1";
default_gain = "150";
min_hdr_ratio = "1";
max_hdr_ratio = "1";
framerate_factor = "1000000";
min_framerate = "1816577";/*1.816577 */
max_framerate = "30000000";/*30*/
step_framerate = "1";
default_framerate = "30000000";
exposure_factor = "1000000";
min_exp_time = "115";/* us */
max_exp_time = "33218";/* us */
step_exp_time = "1";
default_exp_time = "33217";/* us */
embedded_metadata_height = "1";
};
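(As a rough self-consistency check of this mode, assuming pix_clk_hz and line_length are both in pixel-clock units: 756,000,000 / 3,848 ≈ 196,465 lines/s, so one 30 fps frame period is about 6,548 line times, well above the 2,160 active lines plus 1 embedded-metadata line.)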
So I assume the hardware path up to the CSI is working properly? What else should I try?
Thanks for your help.
One observation:
Instead of starting all 3 cameras in one command line, if I use 3 separate commands to launch them, everything starts up fine.
Please help to verify with the multi-session mode of argus_camera.
Thanks
hwang4, June 11, 2024, 12:31am (#5)
I am using a customized rootfs, so to try out argus_camera I flashed another SOM with stock JetPack 4.6.4. There, the argus_camera application runs fine, and so does the gst-launch-1.0 command line. So it appears there is some kind of unit-to-unit variation.
I also notice errors like the following when I run v4l2 multi-camera streaming.
The dmesg log:
[21457.317140] tegra-vi4 15700000.vi: PXL_SOF syncpt timeout! err = -11
[21457.323810] tegra-vi4 15700000.vi: tegra_channel_error_recovery: attempting to reset the capture channel
In the kernel trace:
v4l2-ctl-6292 [003] .... 21457.012428: tegra_channel_open: vi-output, os08a10 12-0021
vi-output, os08-6287 [004] .... 21457.019562: tegra_channel_capture_frame: sof:21456.906480480
v4l2-ctl-6292 [003] .... 21457.027367: tegra_channel_set_power: os08a10 12-0021 : 0x1
v4l2-ctl-6292 [003] .... 21457.027376: camera_common_s_power: status : 0x1
v4l2-ctl-6292 [000] .... 21457.032814: tegra_channel_set_power: 150c0000.nvcsi--1 : 0x1
v4l2-ctl-6292 [000] .... 21457.032817: csi_s_power: enable : 0x1
v4l2-ctl-6292 [003] .... 21457.032954: tegra_channel_capture_setup: vnc_id 1 W 3840 H 2160 fmt 20
vi-output, os08-6293 [004] .... 21457.033400: tegra_channel_set_stream: enable : 0x1
vi-output, os08-6293 [000] .... 21457.037936: tegra_channel_set_stream: 150c0000.nvcsi--1 : 0x1
vi-output, os08-6293 [000] .... 21457.037938: csi_s_stream: enable : 0x1
vi-output, os08-6293 [000] .... 21457.037973: tegra_channel_set_stream: os08a10 12-0021 : 0x1
vi-output, os08-6287 [004] .... 21457.052838: tegra_channel_capture_frame: sof:21456.939776096
kworker/5:3-6283 [005] .... 21457.061027: rtcpu_vinotify_event: tstamp:670814061449 tag:CHANSEL_PXL_EOF channel:0x00 frame:1075 vi_tstamp:670814061027 data:0x086f0002
kworker/5:3-6283 [005] .... 21457.061029: rtcpu_vinotify_event: tstamp:670814112913 tag:ATOMP_FE channel:0x00 frame:1075 vi_tstamp:670814112476 data:0x00000000
kworker/5:3-6283 [005] .... 21457.061029: rtcpu_vinotify_event: tstamp:670814130953 tag:CHANSEL_PXL_SOF channel:0x00 frame:1076 vi_tstamp:670814130474 data:0x00000001
kworker/5:3-6283 [005] .... 21457.061030: rtcpu_vinotify_event: tstamp:670814131116 tag:ATOMP_FS channel:0x00 frame:1076 vi_tstamp:670814130476 data:0x00000000
kworker/5:3-6283 [005] .... 21457.061030: rtcpu_vinotify_event: tstamp:670814133913 tag:CHANSEL_LOAD_FRAMED channel:0x01 frame:1076 vi_tstamp:670814133555 data:0x08000000
kworker/5:3-6283 [005] .... 21457.061032: rtos_queue_send_from_isr_failed: tstamp:670814548325 queue:0x0b4a7698
kworker/5:3-6283 [005] .... 21457.061033: rtos_queue_send_from_isr_failed: tstamp:670814548441 queue:0x0b4ab1a8
kworker/5:3-6283 [005] .... 21457.061034: rtos_queue_send_from_isr_failed: tstamp:670814548546 queue:0x0b4acdd8
kworker/5:3-6283 [005] .... 21457.061034: rtos_queue_send_from_isr_failed: tstamp:670814548650 queue:0x0b4ae958
kworker/5:3-6283 [005] .... 21457.061035: rtos_queue_send_from_isr_failed: tstamp:670814548751 queue:0x0b4af718
kworker/5:3-6283 [005] .... 21457.061036: rtos_queue_send_from_isr_failed: tstamp:670814548852 queue:0x0b4b04d8
kworker/5:3-6283 [005] .... 21457.061036: rtos_queue_send_from_isr_failed: tstamp:670814548957 queue:0x0b4b1298
kworker/5:3-6283 [005] .... 21457.061037: rtos_queue_send_from_isr_failed: tstamp:670814549058 queue:0x0b4b2058
kworker/5:3-6283 [005] .... 21457.061038: rtos_queue_send_failed: tstamp:670814549961 queue:0x0b4a7698
kworker/5:3-6283 [005] .... 21457.061039: rtcpu_vinotify_event: tstamp:670815101932 tag:CHANSEL_PXL_EOF channel:0x00 frame:1076 vi_tstamp:670815101525 data:0x086f0002
kworker/5:3-6283 [005] .... 21457.061040: rtcpu_vinotify_event: tstamp:670815153411 tag:ATOMP_FE channel:0x00 frame:1076 vi_tstamp:670815152974 data:0x00000000
kworker/5:3-6283 [005] .... 21457.061040: rtcpu_vinotify_event: tstamp:670815171457 tag:CHANSEL_PXL_SOF channel:0x00 frame:1077 vi_tstamp:670815170972 data:0x00000001
kworker/5:3-6283 [005] .... 21457.061041: rtcpu_vinotify_event: tstamp:670815171618 tag:ATOMP_FS channel:0x00 frame:1077 vi_tstamp:670815170974 data:0x00000000
kworker/5:3-6283 [005] .... 21457.061041: rtcpu_vinotify_event: tstamp:670815173698 tag:CHANSEL_LOAD_FRAMED channel:0x01 frame:1077 vi_tstamp:670815173345 data:0x08000000
vi-output, os08-6287 [004] .... 21457.086144: tegra_channel_capture_frame: sof:21456.973072032
vi-output, os08-6293 [003] .... 21457.116722: tegra_channel_capture_frame: sof:21457.3591936
kworker/5:3-6283 [005] .... 21457.117027: rtcpu_vinotify_event: tstamp:670816142426 tag:CHANSEL_PXL_EOF channel:0x00 frame:1077 vi_tstamp:670816142023 data:0x086f0002
kworker/5:3-6283 [005] .... 21457.117028: rtcpu_vinotify_event: tstamp:670816193906 tag:ATOMP_FE channel:0x00 frame:1077 vi_tstamp:670816193473 data:0x00000000
kworker/5:3-6283 [005] .... 21457.117029: rtcpu_vinotify_event: tstamp:670816211957 tag:CHANSEL_PXL_SOF channel:0x00 frame:1078 vi_tstamp:670816211470 data:0x00000001
kworker/5:3-6283 [005] .... 21457.117029: rtcpu_vinotify_event: tstamp:670816212116 tag:ATOMP_FS channel:0x00 frame:1078 vi_tstamp:670816211472 data:0x00000000
kworker/5:3-6283 [005] .... 21457.117030: rtcpu_vinotify_event: tstamp:670816214559 tag:CHANSEL_LOAD_FRAMED channel:0x01 frame:1078 vi_tstamp:670816214208 data:0x08000000
kworker/5:3-6283 [005] .... 21457.117031: rtcpu_vinotify_event: tstamp:670817165727 tag:CHANSEL_PXL_SOF channel:0x01 frame:1 vi_tstamp:670817165217 data:0x00000001
kworker/5:3-6283 [005] .... 21457.117032: rtcpu_vinotify_event: tstamp:670817165909 tag:ATOMP_FS channel:0x01 frame:1 vi_tstamp:670817165219 data:0x00000000
kworker/5:3-6283 [005] .... 21457.117032: rtcpu_vinotify_event: tstamp:670817170130 tag:CHANSEL_LOAD_FRAMED channel:0x10 frame:1 vi_tstamp:670817169777 data:0x18000000
vi-output, os08-6287 [004] .... 21457.119438: tegra_channel_capture_frame: sof:21457.6367968
vi-output, os08-6287 [004] .... 21457.152779: tegra_channel_capture_frame: sof:21457.39663936
kworker/5:3-6283 [005] .... 21457.173087: rtcpu_vinotify_event: tstamp:670817182916 tag:CHANSEL_PXL_EOF channel:0x00 frame:1078 vi_tstamp:670817182520 data:0x086f0002
kworker/5:3-6283 [005] .... 21457.173092: rtcpu_vinotify_event: tstamp:670817234410 tag:ATOMP_FE channel:0x00 frame:1078 vi_tstamp:670817233971 data:0x00000000
kworker/5:3-6283 [005] .... 21457.173094: rtcpu_vinotify_event: tstamp:670817245463 tag:CSIMUX_FRAME channel:0x00 frame:1 vi_tstamp:670817244785 data:0x000000a4
kworker/5:3-6283 [005] .... 21457.173096: rtcpu_vinotify_event: tstamp:670817245670 tag:CHANSEL_SHORT_FRAME channel:0x10 frame:1 vi_tstamp:670817244785 data:0x00000002
kworker/5:3-6283 [005] .... 21457.173099: rtcpu_vinotify_event: tstamp:670817245819 tag:ATOMP_FE channel:0x01 frame:1 vi_tstamp:670817244785 data:0x00000000
kworker/5:3-6283 [005] .... 21457.173101: rtcpu_vinotify_event: tstamp:670817252452 tag:CHANSEL_PXL_SOF channel:0x00 frame:1079 vi_tstamp:670817251968 data:0x00000001
kworker/5:3-6283 [005] .... 21457.173103: rtcpu_vinotify_event: tstamp:670817252625 tag:ATOMP_FS channel:0x00 frame:1079 vi_tstamp:670817251970 data:0x00000000
kworker/5:3-6283 [005] .... 21457.173105: rtcpu_vinotify_event: tstamp:670817255012 tag:CHANSEL_LOAD_FRAMED channel:0x01 frame:1079 vi_tstamp:670817254655 data:0x08000000
kworker/5:3-6283 [005] .... 21457.173109: rtos_queue_peek_from_isr_failed: tstamp:670817910356 queue:0x0b4b4940
kworker/5:3-6283 [005] .... 21457.173111: rtcpu_vinotify_event: tstamp:670818223423 tag:CHANSEL_PXL_EOF channel:0x00 frame:1079 vi_tstamp:670818223019 data:0x086f0002
kworker/5:3-6283 [005] .... 21457.173113: rtcpu_vinotify_event: tstamp:670818274915 tag:ATOMP_FE channel:0x00 frame:1079 vi_tstamp:670818274468 data:0x00000000
kworker/5:3-6283 [005] .... 21457.173115: rtcpu_vinotify_event: tstamp:670818292959 tag:CHANSEL_PXL_SOF channel:0x00 frame:1080 vi_tstamp:670818292467 data:0x00000001
kworker/5:3-6283 [005] .... 21457.173117: rtcpu_vinotify_event: tstamp:670818293128 tag:ATOMP_FS channel:0x00 frame:1080 vi_tstamp:670818292468 data:0x00000000
kworker/5:3-6283 [005] .... 21457.173119: rtcpu_vinotify_event: tstamp:670818297098 tag:CHANSEL_LOAD_FRAMED channel:0x01 frame:1080 vi_tstamp:670818296735 data:0x08000000
vi-output, os08-6287 [004] .... 21457.186106: tegra_channel_capture_frame: sof:21457.72959840
vi-output, os08-6287 [004] .... 21457.219397: tegra_channel_capture_frame: sof:21457.106255776
kworker/5:3-6283 [005] .... 21457.229096: rtcpu_vinotify_event: tstamp:670819263946 tag:CHANSEL_PXL_EOF channel:0x00 frame:1080 vi_tstamp:670819263517 data:0x086f0002
kworker/5:3-6283 [005] .... 21457.229102: rtcpu_vinotify_event: tstamp:670819315408 tag:ATOMP_FE channel:0x00 frame:1080 vi_tstamp:670819314966 data:0x00000000
kworker/5:3-6283 [005] .... 21457.229104: rtcpu_vinotify_event: tstamp:670819333461 tag:CHANSEL_PXL_SOF channel:0x00 frame:1081 vi_tstamp:670819332964 data:0x00000001
kworker/5:3-6283 [005] .... 21457.229106: rtcpu_vinotify_event: tstamp:670819333631 tag:ATOMP_FS channel:0x00 frame:1081 vi_tstamp:670819332966 data:0x00000000
kworker/5:3-6283 [005] .... 21457.229108: rtcpu_vinotify_event: tstamp:670819338540 tag:CHANSEL_LOAD_FRAMED channel:0x01 frame:1081 vi_tstamp:670819338183 data:0x08000000
kworker/5:3-6283 [005] .... 21457.229110: rtcpu_vinotify_event: tstamp:670820304425 tag:CHANSEL_PXL_EOF channel:0x00 frame:1081 vi_tstamp:670820304015 data:0x086f0002
kworker/5:3-6283 [005] .... 21457.229112: rtcpu_vinotify_event: tstamp:670820355905 tag:ATOMP_FE channel:0x00 frame:1081 vi_tstamp:670820355464 data:0x00000000
kworker/5:3-6283 [005] .... 21457.229114: rtcpu_vinotify_event: tstamp:670820373950 tag:CHANSEL_PXL_SOF channel:0x00 frame:1082 vi_tstamp:670820373462 data:0x00000001
kworker/5:3-6283 [005] .... 21457.229116: rtcpu_vinotify_event: tstamp:670820374121 tag:ATOMP_FS channel:0x00 frame:1082 vi_tstamp:670820373464 data:0x00000000
kworker/5:3-6283 [005] .... 21457.229121: rtcpu_vinotify_event: tstamp:670820378882 tag:CHANSEL_LOAD_FRAMED channel:0x01 frame:1082 vi_tstamp:670820378526 data:0x08000000
vi-output, os08-6287 [004] .... 21457.252688: tegra_channel_capture_frame: sof:21457.139551712
kworker/5:3-6283 [005] .... 21457.285082: rtcpu_vinotify_event: tstamp:670821344936 tag:CHANSEL_PXL_EOF channel:0x00 frame:1082 vi_tstamp:670821344513 data:0x086f0002
kworker/5:3-6283 [005] .... 21457.285086: rtcpu_vinotify_event: tstamp:670821396402 tag:ATOMP_FE channel:0x00 frame:1082 vi_tstamp:670821395962 data:0x00000000
kworker/5:3-6283 [005] .... 21457.285088: rtcpu_vinotify_event: tstamp:670821414449 tag:CHANSEL_PXL_SOF channel:0x00 frame:1083 vi_tstamp:670821413960 data:0x00000001
kworker/5:3-6283 [005] .... 21457.285090: rtcpu_vinotify_event: tstamp:670821414622 tag:ATOMP_FS channel:0x00 frame:1083 vi_tstamp:670821413962 data:0x00000000
kworker/5:3-6283 [005] .... 21457.285093: rtcpu_vinotify_event: tstamp:670821419209 tag:CHANSEL_LOAD_FRAMED channel:0x01 frame:1083 vi_tstamp:670821418845 data:0x08000000
kworker/5:3-6283 [005] .... 21457.285095: rtcpu_vinotify_event: tstamp:670822385420 tag:CHANSEL_PXL_EOF channel:0x00 frame:1083 vi_tstamp:670822385010 data:0x086f0002
vi-output, os08-6287 [003] .... 21457.285988: tegra_channel_capture_frame: sof:21457.172847648
vi-output, os08-6287 [004] .... 21457.319284: tegra_channel_capture_frame: sof:21457.206143584
vi-output, os08-6293 [004] .... 21457.334049: tegra_channel_capture_setup: vnc_id 1 W 3840 H 2160 fmt 20
vi-output, os08-6293 [004] .... 21457.334082: tegra_channel_capture_frame: sof:21457.3591936
Please refer to the topic below to download the source code and increase the timeout to try.
We have a system where I launch 6 gstreamer pipelines for 6 cameras with nvarguscamerasrc. The pipelines are started programmatically in a loop.
This was working with R32.4.3, but with R32.5.0, only 5 out of the 6 pipelines start correctly.
The following errors are output on stdout/stderr:
nvbuf_utils: dmabuf_fd -1 mapped entry NOT found
nvbuf_utils: Can not get HW buffer from FD… Exiting…
This can be reproduced with a simple example from the command line:
gst-launch-1.0
nvarguscamera…
hwang4, June 11, 2024, 5:20pm (#7)
Thanks for your help. I tried larger timeout values (6 s and 9 s respectively), and they did help reduce the freezes a lot, even though they didn’t completely get rid of them. So it’s a step in the right direction.
However, even though the gst-launch command now starts OK, when I run my Python script to construct a complex DeepStream pipeline using all three cameras, I still see the video freeze, which I suspect is still related to timing; further increasing those timeouts to 10 s/15 s didn’t help either.
Interestingly, when such a freeze happens, I no longer see the errors in the dmesg kernel log that I used to see, but I still see the following errors from nvargus-daemon when I use infinite timeout:
NvViErrorDecode CaptureError: CsimuxFrameError (2)
NvViErrorDecode See https://wiki.nvidia.com/wmpwiki/index.php/Camera_Debugging/CaptureError_debugging for more information and links to documents.
CsimuxFrameError_Regular : 0x000000a4
Stream ID [ 2: 0]: 4
VPR state from fuse block [ 3]: 0
Frame end (FE) [ 5]: 1
A frame end has been found on a regular mode stream.
FS_FAULT [ 7]: 1
A FS packet was found for a virtual channel that was already in frame. An errored FE packet was injected before FS was allowed through.
captureErrorCallback Stream 4.0 capture 3 failed: ts 122488266464 frame 5 error 2 data 0x000000a4
Run the commands below to try:
sudo service nvargus-daemon stop
sudo enableCamInfiniteTimeout=1 nvargus-daemon
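(Optionally, the foreground daemon’s output can be kept for later inspection by teeing it to a file, e.g.
sudo enableCamInfiniteTimeout=1 nvargus-daemon 2>&1 | tee /tmp/argus.log
where /tmp/argus.log is an arbitrary file name.)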
hwang4, June 13, 2024, 4:56pm (#9)
The previous error I showed was already captured with nvargus-daemon running with infinite timeout.
Other observations:
I increased the timeout in vi4_ops.c as shown in other posts. It appears to get rid of the PXL_SOF syncpt timeout when using v4l2 streaming.
A staggered gst-launch appears to run fine:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! fakesink & sleep 1; gst-launch-1.0 nvarguscamerasrc sensor-id=1 ! nvoverlaysink & sleep 2; gst-launch-1.0 nvarguscamerasrc sensor-id=2 ! fakesink &
If I remove the sleeps, it always freezes immediately with the FS_FAULT and FE errors. I would like to understand, from the nvargus-daemon’s perspective, why a delay makes such a difference.
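(For reference, the staggered startup can also be scripted with a small loop; a rough sketch, where the 2 s delay and fakesink are arbitrary choices:
for id in 0 1 2; do
    gst-launch-1.0 nvarguscamerasrc sensor-id=$id ! fakesink &
    sleep 2
done
wait)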
hwang4, August 28, 2024, 4:37pm (#12)
I am still experiencing the issue: frequent Argus errors.
Now I am wondering if this is related to my other problem: Frames dropped with high VIC load.
system (Closed), October 9, 2024, 2:56am (#14)
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.