Argus errors on some boots

Hi,

I’m currently investigating an issue we encounter every once in a while. The setup is a Jetson TX2-NX, JetPack 4.6, a custom carrier board, and three cameras connected via CSI.

Two of the cameras are accessed via Video4Linux. The third camera (IMX415) is accessed via a GStreamer pipeline with nvarguscamerasrc, which sometimes causes errors. Example:

gst-launch-1.0 nvarguscamerasrc sensor-id=2 ! "video/x-raw(memory:NVMM),format=NV12,width=1920,height=1080,framerate=25/1" ! nvvidconv ! fpsdisplaysink video-sink=fakesink -v

Every once in a while, an error occurs when the pipeline is started after booting the device (on around 3-5% of boots). The error can be resolved by restarting the nvargus-daemon with systemctl restart nvargus-daemon.service.
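As a stopgap while the root cause is open, the restart can be scripted. This is only a sketch of a retry-then-restart wrapper (the restart step is the one from this thread; the helper name is my own choice):

```shell
#!/bin/sh
# run_or_restart CMD... : run the capture command once; if it fails,
# restart nvargus-daemon and retry once. Helper name is hypothetical.
run_or_restart() {
    if "$@"; then
        return 0
    fi
    echo "first attempt failed, restarting nvargus-daemon" >&2
    systemctl restart nvargus-daemon.service
    "$@"
}

# Example with the pipeline from above, bounded via num-buffers so it exits:
# run_or_restart gst-launch-1.0 nvarguscamerasrc sensor-id=2 num-buffers=100 \
#     ! 'video/x-raw(memory:NVMM),format=NV12,width=1920,height=1080,framerate=25/1' \
#     ! fakesink
```

This only hides the failure from the application, of course; it doesn’t explain it.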

Does anyone have an idea what causes this error and how it can be avoided?

What I have already investigated:

  • I dumped the camera registers in the error case and compared them to the normal case. They don’t differ, so it should not be an issue with the camera configuration.
  • I measured the CSI signals with an oscilloscope, since the nvargus log includes some CSI errors. Unfortunately, I don’t have hardware to decode the signals, so I can only tell that something is sent via CSI in both cases, not whether the data is valid.
  • I compared the boot logs, and they don’t show any issues in the error case.

GStreamer output in the error case, with the above-mentioned pipeline and debug flags for nvarguscamerasrc enabled:

Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFakeSink:fakesink0: sync = true
0:00:01.117092837 5166  0x5570d124a0 DEBUG     nvarguscamerasrc gstnvarguscamerasrc.cpp:1468:gst_nv_argus_camera_set_caps:<nvarguscamerasrc0> Received caps video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)25/1
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc0.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)25/1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)25/1
/GstPipeline:pipeline0/Gstnvvconv:nvvconv0.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)25/1
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0.GstGhostPad:sink.GstProxyPad:proxypad0: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)25/1
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstTextOverlay:fps-display-text-overlay.GstPad:src: caps = video/x-raw(memory:NVMM, meta:GstVideoOverlayComposition), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)25/1
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFakeSink:fakesink0.GstPad:sink: caps = video/x-raw(memory:NVMM, meta:GstVideoOverlayComposition), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)25/1
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstTextOverlay:fps-display-text-overlay.GstPad:video_sink: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)25/1
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0.GstGhostPad:sink: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)25/1
/GstPipeline:pipeline0/Gstnvvconv:nvvconv0.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)25/1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)25/1
0:00:05.126719265 5166   0x5570d1a190 DEBUG    nvarguscamerasrc gstnvarguscamerasrc.cpp:1796:consumer_thread:<nvarguscamerasrc0>consumer_thread: stop_requested=1

GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3840 x 2160 FR = 31.700001 fps Duration = 31545740 ; Analog Gain range min 1.000000, max 3981.070801; Exposure Range min 1000, max 1000000000;

GST_ARGUS: 3840 x 2160 FR = 31.700001 fps Duration = 31545740 ; Analog Gain range min 1.000000, max 3981.070801; Exposure Range min 1000, max 1000000000;

GST_ARGUS: Running with following settings:
   Camera index = 2 
   Camera mode  = 1 
   Output Stream W = 3840 H = 2160 
   seconds to Run    = 0 
   Frame Rate = 31.700001 
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
nvbuf_utils: dmabuf_fd -1 mapped entry NOT found
nvbuf_utils: Can not get HW buffer from FD... Exiting...
CONSUMER: Done Success
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFakeSink:fakesink0: sync = true
Got EOS from element "pipeline0".
Execution ended after 0:00:04.010506895
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
(Argus) Error Timeout:  (propagating from src/rpc/socket/client/ClientSocketManager.cpp, function send(), line 137)
(Argus) Error Timeout:  (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function dispatch(), line 91)
0:01:21.765592050  5166   0x5570d1a140 DEBUG    nvarguscamerasrc gstnvarguscamerasrc.cpp:1667:argus_thread:<nvarguscamerasrc0> argus_thread: stop_requested=1

GST_ARGUS: Cleaning up
GST_ARGUS: Done Success
Setting pipeline to NULL ...
Freeing pipeline ...
0:01:21.767361779 5166   0x5570b7caa0 DEBUG     nvarguscamerasrc gstnvarguscamerasrc.cpp:2201:gst_nv_argus_camera_src_finalize:<nvarguscamerasrc0> finalize
(Argus) Error Timeout:  (propagating from src/rpc/socket/client/ClientSocketManager.cpp, function send(), line 137)
(Argus) Error Timeout:  (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function dispatch(), line 91)
(Argus) Error InvalidState: Argus client is exiting with 2 outstanding client threads (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 357)

Output of journalctl -u nvargus-daemon.service in the error case:

Mar 14 16:01:19 jetson-tx2 systemd[1]: Started Argus daemon.
Mar 14 16:01:22 jetson-tx2 nvargus-daemon[4425]: === NVIDIA Libargus Camera Service (0.98.3)=== Listening for connections...=== gst-launch-1.0[5168]: Connection established (7FA670F1D0)OFParserListModules: module list: /proc/device-tree/tegra-camera-platform/modules
Mar 14 16:01:22 jetson-tx2 nvargus-daemon[4425]: OFParserListModules: module list: /proc/device-tree/tegra-camera-platform/modules/module1
Mar 14 16:01:22 jetson-tx2 nvargus-daemon[4425]: OFParserListModules: module list: /proc/device-tree/tegra-camera-platform/modules/module2
Mar 14 16:01:22 jetson-tx2 nvargus-daemon[4425]: NvPclHwGetModuleList: WARNING: Could not map module to ISP config string
Mar 14 16:01:22 jetson-tx2 nvargus-daemon[4425]: NvPclHwGetModuleList: No module data found
Mar 14 16:01:22 jetson-tx2 nvargus-daemon[4425]: NvPclHwGetModuleList: WARNING: Could not map module to ISP config string
Mar 14 16:01:22 jetson-tx2 nvargus-daemon[4425]: NvPclHwGetModuleList: No module data found
Mar 14 16:01:22 jetson-tx2 nvargus-daemon[4425]: NvPclHwGetModuleList: WARNING: Could not map module to ISP config string
Mar 14 16:01:22 jetson-tx2 nvargus-daemon[4425]: NvPclHwGetModuleList: No module data found
Mar 14 16:01:22 jetson-tx2 nvargus-daemon[4425]: OFParserGetVirtualDevice: NVIDIA Camera virtual enumerator not found in proc device-tree
Mar 14 16:01:22 jetson-tx2 nvargus-daemon[4425]: ---- imager: No override file found. ----
Mar 14 16:01:22 jetson-tx2 nvargus-daemon[4425]: LSC: LSC surface is not based on full res!
Mar 14 16:01:22 jetson-tx2 nvargus-daemon[4425]: ---- imager: No override file found. ----
Mar 14 16:01:22 jetson-tx2 nvargus-daemon[4425]: LSC: LSC surface is not based on full res!
Mar 14 16:01:22 jetson-tx2 nvargus-daemon[4425]: ---- imager: Found override file [/var/nvidia/nvcam/settings/sla_center_imx415.isp]. ----
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: CAM: serial no file already exists, skips storing again=== gst-launch-1.0[5168]: CameraProvider initialized (0x7fa0ca22d0)CAM: serial no file already exists, skips storing againSCF: Error Timeout: ISP port 0 timed out
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: Error: waitCsiFrameStart timeout guid 2
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: VI Stream Id = 4 Virtual Channel = 0
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: ************VI Debug Registers**********
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: VI_CSIMUX_STAT_FRAME_16         = 0x00000000
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: VI_CSIMUX_FRAME_STATUS_0         = 0x00000000
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: VI_CFG_INTERRUPT_STATUS_0         = 0x3f000000
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: VI_ISPBUFA_ERROR_0         = 0x00000000
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: VI_FMLITE_ERROR_0         = 0x00000000
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: VI_NOTIFY_ERROR_0         = 0x00000000
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: *****************************************
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: CSI Stream Id = 4 Brick Id = 2
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: ************CSI Debug Registers**********
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: CILA_INTR_STATUS_CILA[0x30400]         = 0x080001d9
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: CILB_INTR_STATUS_CILB[0x30c00]         = 0x00000000
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: INTR_STATUS[0x300a4]         = 0x00000000
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: ERR_INTR_STATUS[0x300ac]         = 0x00000000
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: ERROR_STATUS2VI_VC0[0x30094]         = 0x00000000
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: ERROR_STATUS2VI_VC1[0x30098]         = 0x00000000
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: ERROR_STATUS2VI_VC2[0x3009c]         = 0x00000000
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: ERROR_STATUS2VI_VC3[0x300a0]         = 0x00000000
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: *****************************************
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: SCF: Error BadValue: timestamp cannot be 0 (in src/services/capture/NvViCsiHw.cpp, function waitCsiFrameStart(), line 637)
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: SCF: Error BadValue:  (propagating from src/common/Utils.cpp, function workerThread(), line 116)
Mar 14 16:01:24 jetson-tx2 nvargus-daemon[4425]: SCF: Error BadValue: Worker thread ViCsiHw frameStart failed (in src/common/Utils.cpp, function workerThread(), line 133)
Mar 14 16:01:31 jetson-tx2 nvargus-daemon[4425]: SCF: Error Timeout:  (propagating from src/services/capture/CaptureServiceEvent.cpp, function wait(), line 59)
Mar 14 16:01:31 jetson-tx2 nvargus-daemon[4425]: Error: Camera HwEvents wait, this may indicate a hardware timeout occured,abort current/incoming cc
Mar 14 16:01:33 jetson-tx2 nvargus-daemon[4425]: SCF: Error Timeout: ISP Stats timed out! (in src/services/capture/NvIspHw.cpp, function waitIspStatsFinished(), line 566)
Mar 14 16:01:35 jetson-tx2 nvargus-daemon[4425]: SCF: Error Timeout: ISP port 0 timed out! (in src/services/capture/NvIspHw.cpp, function waitIspFrameEnd(), line 478)
Mar 14 16:01:35 jetson-tx2 nvargus-daemon[4425]: SCF: Error Timeout:  (propagating from src/services/capture/CaptureServiceDeviceIsp.cpp, function waitCompletion(), line 423)
Mar 14 16:01:35 jetson-tx2 nvargus-daemon[4425]: SCF: Error Timeout:  (propagating from src/services/capture/CaptureServiceDevice.cpp, function pause(), line 949)
Mar 14 16:01:35 jetson-tx2 nvargus-daemon[4425]: SCF: Error Timeout: During capture abort, syncpoint wait timeout waiting for current frame to finish (in src/services/capture/CaptureServiceDevice.cpp, function handleCancelSourceRequests(), line 1032)
Mar 14 16:01:35 jetson-tx2 nvargus-daemon[4425]: SCF: Error Timeout: ISP Stats timed out! (in src/services/capture/NvIspHw.cpp, function waitIspStatsFinished(), line 566)
Mar 14 16:01:35 jetson-tx2 nvargus-daemon[4425]: SCF: Error Timeout: Sending critical error event (in src/api/Session.cpp, function sendErrorEvent(), line 997)
Mar 14 16:01:35 jetson-tx2 nvargus-daemon[4425]: SCF: Error BadParameter: CC has already been disposed (in src/components/CaptureContainerManager.cpp, function dispose(), line 161)
Mar 14 16:01:36 jetson-tx2 nvargus-daemon[4425]: SCF: Error Timeout: ISP port 1 timed out! (in src/services/capture/NvIspHw.cpp, function waitIspFrameEnd(), line 501)
Mar 14 16:01:38 jetson-tx2 nvargus-daemon[4425]: SCF: Error Timeout: ISP port 2 timed out! (in src/services/capture/NvIspHw.cpp, function waitIspFrameEnd(), line 512)
Mar 14 16:01:38 jetson-tx2 nvargus-daemon[4425]: SCF: Error Timeout:  (propagating from src/services/capture/NvIspHw.cpp, function waitIspFrameEnd(), line 524)
Mar 14 16:01:38 jetson-tx2 nvargus-daemon[4425]: SCF: Error Timeout:  (propagating from src/common/Utils.cpp, function workerThread(), line 116)
Mar 14 16:01:38 jetson-tx2 nvargus-daemon[4425]: SCF: Error Timeout: Worker thread IspHw frameComplete failed (in src/common/Utils.cpp, function workerThread(), line 133)
Mar 14 16:01:38 jetson-tx2 nvargus-daemon[4425]: SCF: Error Timeout: NvRmSyncWait failed (in src/api/Buffer.cpp, function cpuWaitFences(), line 621)
Mar 14 16:01:38 jetson-tx2 nvargus-daemon[4425]: SCF: Error Timeout:  (propagating from src/api/Buffer.cpp, function cpuWaitInputFences(), line 542)
Mar 14 16:01:38 jetson-tx2 nvargus-daemon[4425]: SCF: Error Timeout:  (propagating from src/api/Buffer.cpp, function acquire(), line 679)
Mar 14 16:01:38 jetson-tx2 nvargus-daemon[4425]: SCF: Error Timeout:  (propagating from src/api/Buffer.cpp, function ScopedBufferLock(), line 656)

hello haensler,

the log shows a hardware timeout for the failure use-case.

could you please narrow down the issue to evaluate the camera functionality?
for example, does it work normally when using standard v4l2 controls?
please also check whether there is a delay between the capture request and the arrival of the 1st start-of-frame.

Hi,

thank you for your reply.

This is quite hard to answer. I need to test whether the error occurs by running the GStreamer pipeline, and in the error case the pipeline does not seem to close the camera correctly: trying to capture with v4l afterwards always gives a “resource busy” error. I can restart the nvargus-daemon to free the camera, but that resolves the whole problem anyway, so the subsequent v4l test is not meaningful.

However, what I can tell is that I have never seen the issue when capturing with v4l alone. I’ve done >200 boots, checked the frame rate with the following command, and never saw any errors.

v4l2-ctl -d /dev/video2 --stream-mmap

Any suggestions on how to check this?
Sorry, but I’m either not understanding what you mean, or I don’t know how to check it.

Thanks in advance!
Greetings

hello haensler,

you cannot access the same video stream with different apps at the same time. and it looks like a lock is not released on the Argus side in the failure case; that’s why the resource-busy error is reported when you try to access the camera with the v4l pipeline.

here’s another v4l sample pipeline to test the camera stream; it adds some options, for example to specify the sensor mode and the capture count.
$ v4l2-ctl -d /dev/video0 --set-fmt-video=width=1920,height=1080,pixelformat=RG10 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=100

assuming you’re able to access the camera stream without failures, there might be an issue with the sensor device tree. you may review the clock settings, such as the sensor pixel clock.

normally, you would probe the MIPI signal to evaluate the latency between the capture request and the arrival of the 1st start-of-frame.
or… you may follow the steps below to enable the VI tracing logs.

echo 1 > /sys/kernel/debug/tracing/tracing_on
echo 30720 > /sys/kernel/debug/tracing/buffer_size_kb
echo 1 > /sys/kernel/debug/tracing/events/tegra_rtcpu/enable
echo 1 > /sys/kernel/debug/tracing/events/freertos/enable
echo 2 > /sys/kernel/debug/camrtc/log-level
echo > /sys/kernel/debug/tracing/trace
cat /sys/kernel/debug/tracing/trace
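To check from the captured trace whether the first start-of-frame ever arrives, one can filter for the CHANSEL_PXL_SOF tag emitted by the tegra_rtcpu events. A small sketch (the helper name is my own; the tag and vi_tstamp field names come from the trace format):

```shell
# first_sof: read kernel trace lines on stdin and print the vi_tstamp of
# the first CHANSEL_PXL_SOF event; prints nothing if no SOF occurred.
# Helper name is hypothetical.
first_sof() {
    grep -m1 'CHANSEL_PXL_SOF' | sed -n 's/.*vi_tstamp:\([0-9]*\).*/\1/p'
}

# usage: first_sof < /sys/kernel/debug/tracing/trace
```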

furthermore, there’s a device tree property, set_mode_delay_ms, which configures the maximum wait time for the first frame after capture starts; the unit is milliseconds.
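For reference, the tegra camera device tree stores these per-mode properties as NUL-terminated strings, so the currently flashed value can be inspected on a running system roughly like this (the glob path is an assumption; adjust it to the actual sensor node layout):

```shell
# dt_str: print a device-tree string property without the trailing NUL.
dt_str() { tr -d '\0'; }

# List set_mode_delay_ms for every sensor mode that defines it
# (glob path is an assumption; adjust to your device tree layout):
for f in /proc/device-tree/*/*/mode*/set_mode_delay_ms; do
    [ -f "$f" ] && printf '%s: %s\n' "$f" "$(dt_str < "$f")"
done
```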

Hi,

Thank you a lot for the detailed description!

I ran a lot of tests with this pipeline. I always saw around 60 FPS and never ran into crashes. However, I saw deviations on some boots, which may explain why the GStreamer pipeline crashes.

In some cases, the pipeline needs around 5 seconds to start capturing. I inspected dmesg and saw that the v4l pipeline waits for SOF, never gets it, and after five seconds attempts to reset the capture channel. This reset recovers the error, and the pipeline then captures normally.

[   10.377508] vc_mipi 32-001a: vc_sen_start_stream(): Start streaming
[   15.696598] tegra-vi4 15700000.vi: PXL_SOF syncpt timeout! err = -11
[   15.702997] tegra-vi4 15700000.vi: tegra_channel_error_recovery: attempting to reset the capture channel
[   17.936486] vc_mipi 32-001a: vc_sen_stop_stream(): Stop streaming

This auto-reset may be the reason why the v4l pipeline is always able to capture eventually, while the GStreamer pipeline just crashes.
However, this still doesn’t answer why the reset is required on some boots. By the way, my recent measurements show that this occurs on ~3% of boots.
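To keep quantifying this across boots without watching each one, a boot-time check for the dmesg timeout message can append to a log. A sketch (the log path and helper name are my own choices; the match string is the message quoted above):

```shell
# sof_status: read dmesg output on stdin, print FAIL if the PXL_SOF
# syncpt timeout occurred during this boot, OK otherwise.
# Helper name is hypothetical.
sof_status() {
    if grep -q 'PXL_SOF syncpt timeout'; then echo FAIL; else echo OK; fi
}

# usage, e.g. from a oneshot service after the first capture attempt:
# dmesg | sof_status >> /var/log/sof-boot-status.log
```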


Having a closer look at those traces tells me a similar story:

  • In the non-error case, I get the first SOF after ~0.5 s:
     kworker/2:3-1907  [002] ....     9.957607: rtos_queue_peek_from_isr_failed: tstamp:484282832 queue:0x0b4b4940
     kworker/2:3-1907  [002] ....     9.957614: rtos_queue_send_from_isr_failed: tstamp:484521382 queue:0x0b4a7698
     kworker/2:3-1907  [002] ....     9.957616: rtos_queue_send_from_isr_failed: tstamp:484521491 queue:0x0b4ab1a8
     kworker/2:3-1907  [002] ....     9.957618: rtos_queue_send_from_isr_failed: tstamp:484521598 queue:0x0b4acdd8
     kworker/2:3-1907  [002] ....     9.957619: rtos_queue_send_from_isr_failed: tstamp:484521706 queue:0x0b4af718
     kworker/2:3-1907  [002] ....     9.957621: rtos_queue_send_from_isr_failed: tstamp:484521810 queue:0x0b4b04d8
     kworker/2:3-1907  [002] ....     9.957622: rtos_queue_send_from_isr_failed: tstamp:484521915 queue:0x0b4b1298
     kworker/2:3-1907  [002] ....     9.957642: rtos_queue_send_from_isr_failed: tstamp:484522019 queue:0x0b4b2058
     kworker/2:3-1907  [002] ....     9.957644: rtos_queue_send_failed: tstamp:484522467 queue:0x0b4a7698
     kworker/2:3-1907  [002] ....     9.957646: rtos_queue_send_from_isr_failed: tstamp:484527047 queue:0x0b4a7698
     kworker/2:3-1907  [002] ....     9.957647: rtos_queue_send_from_isr_failed: tstamp:484527153 queue:0x0b4ab1a8
     kworker/2:3-1907  [002] ....     9.957649: rtos_queue_send_from_isr_failed: tstamp:484527262 queue:0x0b4acdd8
     kworker/2:3-1907  [002] ....     9.957650: rtos_queue_send_from_isr_failed: tstamp:484527370 queue:0x0b4af718
     kworker/2:3-1907  [002] ....     9.957651: rtos_queue_send_from_isr_failed: tstamp:484527475 queue:0x0b4b04d8
     kworker/2:3-1907  [002] ....     9.957653: rtos_queue_send_from_isr_failed: tstamp:484527580 queue:0x0b4b1298
     kworker/2:3-1907  [002] ....     9.957654: rtos_queue_send_from_isr_failed: tstamp:484527685 queue:0x0b4b2058
     kworker/2:3-1907  [002] ....     9.957656: rtos_queue_send_failed: tstamp:484528628 queue:0x0b4a7698
     kworker/2:0-24    [002] ....    10.125629: rtos_queue_peek_from_isr_failed: tstamp:489282832 queue:0x0b4b4940
     kworker/2:0-24    [002] ....    10.293619: rtos_queue_peek_from_isr_failed: tstamp:494282815 queue:0x0b4b4940
     kworker/2:0-24    [002] ....    10.405661: rtos_queue_peek_from_isr_failed: tstamp:499282808 queue:0x0b4b4940
     kworker/2:0-24    [002] ....    10.573663: rtos_queue_peek_from_isr_failed: tstamp:504282803 queue:0x0b4b4940
     kworker/2:0-24    [002] ....    10.629673: rtcpu_vinotify_event: tstamp:505944015 tag:ATOMP_FS channel:0x00 frame:12 vi_tstamp:505943623 data:0x00000000
     kworker/2:0-24    [002] ....    10.629675: rtcpu_vinotify_event: tstamp:505960190 tag:CHANSEL_PXL_SOF channel:0x00 frame:12 vi_tstamp:505959812 data:0x00000001
     kworker/2:0-24    [002] ....    10.629693: rtcpu_vinotify_event: tstamp:505963200 tag:CHANSEL_LOAD_FRAMED channel:0x10 frame:12 vi_tstamp:505962825 data:0x08000000
     kworker/2:0-24    [002] ....    10.685674: rtcpu_vinotify_event: tstamp:506433013 tag:CHANSEL_PXL_EOF channel:0x00 frame:12 vi_tstamp:506432310 data:0x04370002
     kworker/2:0-24    [002] ....    10.685677: rtcpu_vinotify_event: tstamp:506433132 tag:ATOMP_FE channel:0x00 frame:12 vi_tstamp:506432326 data:0x00000000
     kworker/2:0-24    [002] ....    10.685678: rtcpu_vinotify_event: tstamp:506456135 tag:ATOMP_FS channel:0x00 frame:13 vi_tstamp:506455752 data:0x00000000
     kworker/2:0-24    [002] ....    10.685680: rtcpu_vinotify_event: tstamp:506472323 tag:CHANSEL_PXL_SOF channel:0x00 frame:13 vi_tstamp:506471941 data:0x00000001
     kworker/2:0-24    [002] ....    10.685681: rtcpu_vinotify_event: tstamp:506474814 tag:CHANSEL_LOAD_FRAMED channel:0x10 frame:13 vi_tstamp:506474435 data:0x08000000
     kworker/2:0-24    [002] ....    10.685683: rtcpu_vinotify_event: tstamp:506945139 tag:CHANSEL_PXL_EOF channel:0x00 frame:13 vi_tstamp:506944439 data:0x04370002
     kworker/2:0-24    [002] ....    10.685684: rtcpu_vinotify_event: tstamp:506945256 tag:ATOMP_FE channel:0x00 frame:13 vi_tstamp:506944455 data:0x00000000
     kworker/2:0-24    [002] ....    10.685686: rtcpu_vinotify_event: tstamp:506968264 tag:ATOMP_FS channel:0x00 frame:14 vi_tstamp:506967881 data:0x00000000
     kworker/2:0-24    [002] ....    10.685687: rtcpu_vinotify_event: tstamp:506984442 tag:CHANSEL_PXL_SOF channel:0x00 frame:14 vi_tstamp:506984070 data:0x00000001
     kworker/2:0-24    [002] ....    10.685689: rtcpu_vinotify_event: tstamp:506987167 tag:CHANSEL_LOAD_FRAMED channel:0x10 frame:14 vi_tstamp:506986794 data:0x08000000
     kworker/2:0-24    [002] ....    10.685690: rtcpu_vinotify_event: tstamp:507457276 tag:CHANSEL_PXL_EOF channel:0x00 frame:14 vi_tstamp:507456568 data:0x04370002
  • In the error case with the v4l pipeline, I NEVER get a SOF until the capture channel is reset after 5 seconds:
     kworker/1:2-1140  [001] ....    10.055863: rtos_queue_send_from_isr_failed: tstamp:486912926 queue:0x0b4a7698
     kworker/1:2-1140  [001] ....    10.055869: rtos_queue_send_from_isr_failed: tstamp:486913034 queue:0x0b4ab1a8
     kworker/1:2-1140  [001] ....    10.055871: rtos_queue_send_from_isr_failed: tstamp:486913140 queue:0x0b4acdd8
     kworker/1:2-1140  [001] ....    10.055872: rtos_queue_send_from_isr_failed: tstamp:486913248 queue:0x0b4af718
     kworker/1:2-1140  [001] ....    10.055873: rtos_queue_send_from_isr_failed: tstamp:486913353 queue:0x0b4b04d8
     kworker/1:2-1140  [001] ....    10.055875: rtos_queue_send_from_isr_failed: tstamp:486913457 queue:0x0b4b1298
     kworker/1:2-1140  [001] ....    10.055876: rtos_queue_send_from_isr_failed: tstamp:486913560 queue:0x0b4b2058
     kworker/1:2-1140  [001] ....    10.055878: rtos_queue_send_failed: tstamp:486913999 queue:0x0b4a7698
     kworker/1:2-1140  [001] ....    10.055880: rtos_queue_send_from_isr_failed: tstamp:486916282 queue:0x0b4a7698
     kworker/1:2-1140  [001] ....    10.055881: rtos_queue_send_from_isr_failed: tstamp:486916389 queue:0x0b4ab1a8
     kworker/1:2-1140  [001] ....    10.055882: rtos_queue_send_from_isr_failed: tstamp:486916495 queue:0x0b4acdd8
     kworker/1:2-1140  [001] ....    10.055884: rtos_queue_send_from_isr_failed: tstamp:486916602 queue:0x0b4af718
     kworker/1:2-1140  [001] ....    10.055885: rtos_queue_send_from_isr_failed: tstamp:486916713 queue:0x0b4b04d8
     kworker/1:2-1140  [001] ....    10.055886: rtos_queue_send_from_isr_failed: tstamp:486916816 queue:0x0b4b1298
     kworker/1:2-1140  [001] ....    10.055888: rtos_queue_send_from_isr_failed: tstamp:486916920 queue:0x0b4b2058
     kworker/1:2-1140  [001] ....    10.055889: rtos_queue_send_failed: tstamp:486917863 queue:0x0b4a7698
     kworker/1:2-1140  [001] ....    10.055891: rtos_queue_peek_from_isr_failed: tstamp:487211711 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    10.223826: rtos_queue_peek_from_isr_failed: tstamp:492211702 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    10.391848: rtos_queue_peek_from_isr_failed: tstamp:497211695 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    10.503874: rtos_queue_peek_from_isr_failed: tstamp:502211689 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    10.671860: rtos_queue_peek_from_isr_failed: tstamp:507211683 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    10.839895: rtos_queue_peek_from_isr_failed: tstamp:512211677 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    11.007875: rtos_queue_peek_from_isr_failed: tstamp:517211668 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    11.175891: rtos_queue_peek_from_isr_failed: tstamp:522211661 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    11.343930: rtos_queue_peek_from_isr_failed: tstamp:527211654 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    11.511913: rtos_queue_peek_from_isr_failed: tstamp:532211649 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    11.623921: rtos_queue_peek_from_isr_failed: tstamp:537211643 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    11.791956: rtos_queue_peek_from_isr_failed: tstamp:542211636 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    11.959952: rtos_queue_peek_from_isr_failed: tstamp:547211627 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    12.127954: rtos_queue_peek_from_isr_failed: tstamp:552211621 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    12.295975: rtos_queue_peek_from_isr_failed: tstamp:557211614 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    12.463966: rtos_queue_peek_from_isr_failed: tstamp:562211607 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    12.635981: rtos_queue_peek_from_isr_failed: tstamp:567211601 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    12.747992: rtos_queue_peek_from_isr_failed: tstamp:572211592 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    12.916027: rtos_queue_peek_from_isr_failed: tstamp:577211586 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    13.084025: rtos_queue_peek_from_isr_failed: tstamp:582211580 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    13.252016: rtos_queue_peek_from_isr_failed: tstamp:587211574 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    13.420031: rtos_queue_peek_from_isr_failed: tstamp:592211568 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    13.588037: rtos_queue_peek_from_isr_failed: tstamp:597211560 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    13.700035: rtos_queue_peek_from_isr_failed: tstamp:602211551 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    13.868077: rtos_queue_peek_from_isr_failed: tstamp:607211545 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    14.036057: rtos_queue_peek_from_isr_failed: tstamp:612211538 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    14.204073: rtos_queue_peek_from_isr_failed: tstamp:617211533 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    14.372083: rtos_queue_peek_from_isr_failed: tstamp:622211527 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    14.540097: rtos_queue_peek_from_isr_failed: tstamp:627211525 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    14.708089: rtos_queue_peek_from_isr_failed: tstamp:632211511 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    14.820099: rtos_queue_peek_from_isr_failed: tstamp:637211504 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    14.988125: rtos_queue_peek_from_isr_failed: tstamp:642211497 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    15.156134: rtos_queue_peek_from_isr_failed: tstamp:647211495 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    15.324123: rtos_queue_peek_from_isr_failed: tstamp:652211485 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    15.492136: rtos_queue_peek_from_isr_failed: tstamp:657211478 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    15.660150: rtos_queue_peek_from_isr_failed: tstamp:662211472 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    15.716218: rtos_queue_send_from_isr_failed: tstamp:665146330 queue:0x0b4a7698
     kworker/1:2-1140  [001] ....    15.716224: rtos_queue_send_from_isr_failed: tstamp:665146442 queue:0x0b4ab1a8
     kworker/1:2-1140  [001] ....    15.716225: rtos_queue_send_from_isr_failed: tstamp:665146549 queue:0x0b4acdd8
     kworker/1:2-1140  [001] ....    15.716227: rtos_queue_send_from_isr_failed: tstamp:665146657 queue:0x0b4af718
     kworker/1:2-1140  [001] ....    15.716228: rtos_queue_send_from_isr_failed: tstamp:665146762 queue:0x0b4b04d8
     kworker/1:2-1140  [001] ....    15.716229: rtos_queue_send_from_isr_failed: tstamp:665146867 queue:0x0b4b1298
     kworker/1:2-1140  [001] ....    15.716231: rtos_queue_send_from_isr_failed: tstamp:665146971 queue:0x0b4b2058
     kworker/1:2-1140  [001] ....    15.716233: rtos_queue_send_failed: tstamp:665147588 queue:0x0b4a7698
     kworker/1:2-1140  [001] ....    15.716234: rtos_queue_send_from_isr_failed: tstamp:665149486 queue:0x0b4a7698
     kworker/1:2-1140  [001] ....    15.716236: rtos_queue_send_from_isr_failed: tstamp:665149593 queue:0x0b4ab1a8
     kworker/1:2-1140  [001] ....    15.716237: rtos_queue_send_from_isr_failed: tstamp:665149698 queue:0x0b4acdd8
     kworker/1:2-1140  [001] ....    15.716238: rtos_queue_send_from_isr_failed: tstamp:665149804 queue:0x0b4af718
     kworker/1:2-1140  [001] ....    15.716240: rtos_queue_send_from_isr_failed: tstamp:665149908 queue:0x0b4b04d8
     kworker/1:2-1140  [001] ....    15.716241: rtos_queue_send_from_isr_failed: tstamp:665150010 queue:0x0b4b1298
     kworker/1:2-1140  [001] ....    15.716242: rtos_queue_send_from_isr_failed: tstamp:665150115 queue:0x0b4b2058
     kworker/1:2-1140  [001] ....    15.716244: rtos_queue_send_failed: tstamp:665150541 queue:0x0b4a7698
     kworker/1:2-1140  [001] ....    15.716245: rtos_queue_send_from_isr_failed: tstamp:665207989 queue:0x0b4a7698
     kworker/1:2-1140  [001] ....    15.716246: rtos_queue_send_from_isr_failed: tstamp:665208096 queue:0x0b4ab1a8
     kworker/1:2-1140  [001] ....    15.716248: rtos_queue_send_from_isr_failed: tstamp:665208203 queue:0x0b4acdd8
     kworker/1:2-1140  [001] ....    15.716249: rtos_queue_send_from_isr_failed: tstamp:665208310 queue:0x0b4af718
     kworker/1:2-1140  [001] ....    15.716251: rtos_queue_send_from_isr_failed: tstamp:665208415 queue:0x0b4b04d8
     kworker/1:2-1140  [001] ....    15.716252: rtos_queue_send_from_isr_failed: tstamp:665208519 queue:0x0b4b1298
     kworker/1:2-1140  [001] ....    15.716253: rtos_queue_send_from_isr_failed: tstamp:665208623 queue:0x0b4b2058
     kworker/1:2-1140  [001] ....    15.716254: rtos_queue_send_failed: tstamp:665209064 queue:0x0b4a7698
     kworker/1:2-1140  [001] ....    15.716256: rtos_queue_send_from_isr_failed: tstamp:665212160 queue:0x0b4a7698
     kworker/1:2-1140  [001] ....    15.716257: rtos_queue_send_from_isr_failed: tstamp:665212265 queue:0x0b4ab1a8
     kworker/1:2-1140  [001] ....    15.716258: rtos_queue_send_from_isr_failed: tstamp:665212372 queue:0x0b4acdd8
     kworker/1:2-1140  [001] ....    15.716260: rtos_queue_send_from_isr_failed: tstamp:665212479 queue:0x0b4af718
     kworker/1:2-1140  [001] ....    15.716261: rtos_queue_send_from_isr_failed: tstamp:665212584 queue:0x0b4b04d8
     kworker/1:2-1140  [001] ....    15.716262: rtos_queue_send_from_isr_failed: tstamp:665212689 queue:0x0b4b1298
     kworker/1:2-1140  [001] ....    15.716264: rtos_queue_send_from_isr_failed: tstamp:665212794 queue:0x0b4b2058
     kworker/1:2-1140  [001] ....    15.716265: rtos_queue_send_failed: tstamp:665213743 queue:0x0b4a7698
     kworker/1:2-1140  [001] ....    15.772158: rtcpu_vinotify_event: tstamp:665584142 tag:ATOMP_FS channel:0x00 frame:65 vi_tstamp:665583746 data:0x00000000
     kworker/1:2-1140  [001] ....    15.772162: rtcpu_vinotify_event: tstamp:665600313 tag:CHANSEL_PXL_SOF channel:0x00 frame:65 vi_tstamp:665599935 data:0x00000001
     kworker/1:2-1140  [001] ....    15.772163: rtcpu_vinotify_event: tstamp:665603852 tag:CHANSEL_LOAD_FRAMED channel:0x10 frame:65 vi_tstamp:665603478 data:0x08000000
     kworker/1:2-1140  [001] ....    15.772165: rtcpu_vinotify_event: tstamp:666073136 tag:CHANSEL_PXL_EOF channel:0x00 frame:65 vi_tstamp:666072433 data:0x04370002
     kworker/1:2-1140  [001] ....    15.772166: rtcpu_vinotify_event: tstamp:666073262 tag:ATOMP_FE channel:0x00 frame:65 vi_tstamp:666072449 data:0x00000000
     kworker/1:2-1140  [001] ....    15.772168: rtcpu_vinotify_event: tstamp:666096254 tag:ATOMP_FS channel:0x00 frame:66 vi_tstamp:666095875 data:0x00000000
     kworker/1:2-1140  [001] ....    15.772169: rtcpu_vinotify_event: tstamp:666112445 tag:CHANSEL_PXL_SOF channel:0x00 frame:66 vi_tstamp:666112064 data:0x00000001
     kworker/1:2-1140  [001] ....    15.772171: rtcpu_vinotify_event: tstamp:666114933 tag:CHANSEL_LOAD_FRAMED channel:0x10 frame:66 vi_tstamp:666114559 data:0x08000000
     kworker/1:2-1140  [001] ....    15.772172: rtcpu_vinotify_event: tstamp:666585262 tag:CHANSEL_PXL_EOF channel:0x00 frame:66 vi_tstamp:666584562 data:0x04370002
     kworker/1:2-1140  [001] ....    15.772173: rtcpu_vinotify_event: tstamp:666585380 tag:ATOMP_FE channel:0x00 frame:66 vi_tstamp:666584578 data:0x00000000
     kworker/1:2-1140  [001] ....    15.772175: rtcpu_vinotify_event: tstamp:666608377 tag:ATOMP_FS channel:0x00 frame:67 vi_tstamp:666608005 data:0x00000000
     kworker/1:2-1140  [001] ....    15.772176: rtcpu_vinotify_event: tstamp:666624575 tag:CHANSEL_PXL_SOF channel:0x00 frame:67 vi_tstamp:666624193 data:0x00000001
     kworker/1:2-1140  [001] ....    15.772178: rtcpu_vinotify_event: tstamp:666627503 tag:CHANSEL_LOAD_FRAMED channel:0x10 frame:67 vi_tstamp:666627132 data:0x08000000
     kworker/1:2-1140  [001] ....    15.828194: rtcpu_vinotify_event: tstamp:667097395 tag:CHANSEL_PXL_EOF channel:0x00 frame:67 vi_tstamp:667096691 data:0x04370002
     kworker/1:2-1140  [001] ....    15.828198: rtcpu_vinotify_event: tstamp:667097511 tag:ATOMP_FE channel:0x00 frame:67 vi_tstamp:667096707 data:0x00000000
     kworker/1:2-1140  [001] ....    15.828200: rtcpu_vinotify_event: tstamp:667120512 tag:ATOMP_FS channel:0x00 frame:68 vi_tstamp:667120134 data:0x00000000
     kworker/1:2-1140  [001] ....    15.828201: rtcpu_vinotify_event: tstamp:667136695 tag:CHANSEL_PXL_SOF channel:0x00 frame:68 vi_tstamp:667136323 data:0x00000001
     kworker/1:2-1140  [001] ....    15.828203: rtcpu_vinotify_event: tstamp:667139234 tag:CHANSEL_LOAD_FRAMED channel:0x10 frame:68 vi_tstamp:667138860 data:0x08000000
     kworker/1:2-1140  [001] ....    15.828206: rtos_queue_peek_from_isr_failed: tstamp:667211466 queue:0x0b4b4940
     kworker/1:2-1140  [001] ....    15.828229: rtcpu_vinotify_event: tstamp:667609520 tag:CHANSEL_PXL_EOF channel:0x00 frame:68 vi_tstamp:667608820 data:0x04370002
     kworker/1:2-1140  [001] ....    15.828231: rtcpu_vinotify_event: tstamp:667609645 tag:ATOMP_FE channel:0x00 frame:68 vi_tstamp:667608836 data:0x00000000
     kworker/1:2-1140  [001] ....    15.828232: rtcpu_vinotify_event: tstamp:667632648 tag:ATOMP_FS channel:0x00 frame:69 vi_tstamp:667632263 data:0x00000000
     kworker/1:2-1140  [001] ....    15.828233: rtcpu_vinotify_event: tstamp:667648828 tag:CHANSEL_PXL_SOF channel:0x00 frame:69 vi_tstamp:667648451 data:0x00000001
     kworker/1:2-1140  [001] ....    15.828235: rtcpu_vinotify_event: tstamp:667652243 tag:CHANSEL_LOAD_FRAMED channel:0x10 frame:69 vi_tstamp:667651869 data:0x08000000
     kworker/1:2-1140  [001] ....    15.828236: rtcpu_vinotify_event: tstamp:668121657 tag:CHANSEL_PXL_EOF channel:0x00 frame:69 vi_tstamp:668120949 data:0x04370002
     kworker/1:2-1140  [001] ....    15.828238: rtcpu_vinotify_event: tstamp:668121772 tag:ATOMP_FE channel:0x00 frame:69 vi_tstamp:668120966 data:0x00000000
     kworker/1:2-1140  [001] ....    15.828239: rtcpu_vinotify_event: tstamp:668144774 tag:ATOMP_FS channel:0x00 frame:70 vi_tstamp:668144392 data:0x00000000
     kworker/1:2-1140  [001] ....    15.828241: rtcpu_vinotify_event: tstamp:668160954 tag:CHANSEL_PXL_SOF channel:0x00 frame:70 vi_tstamp:668160581 data:0x00000001
     kworker/1:2-1140  [001] ....    15.828242: rtcpu_vinotify_event: tstamp:668163855 tag:CHANSEL_LOAD_FRAMED channel:0x10 frame:70 vi_tstamp:668163480 data:0x08000000
     kworker/1:2-1140  [001] ....    15.828243: rtcpu_vinotify_event: tstamp:668633777 tag:CHANSEL_PXL_EOF channel:0x00 frame:70 vi_tstamp:668633078 data:0x04370002
     kworker/1:2-1140  [001] ....    15.828245: rtcpu_vinotify_event: tstamp:668633892 tag:ATOMP_FE channel:0x00 frame:70 vi_tstamp:668633095 data:0x00000000
     kworker/1:2-1140  [001] ....    15.828246: rtcpu_vinotify_event: tstamp:668656906 tag:ATOMP_FS channel:0x00 frame:71 vi_tstamp:668656522 data:0x00000000
     kworker/1:2-1140  [001] ....    15.828247: rtcpu_vinotify_event: tstamp:668673081 tag:CHANSEL_PXL_SOF channel:0x00 frame:71 vi_tstamp:668672710 data:0x00000001
     kworker/1:2-1140  [001] ....    15.828249: rtcpu_vinotify_event: tstamp:668675749 tag:CHANSEL_LOAD_FRAMED channel:0x10 frame:71 vi_tstamp:668675374 data:0x08000000
     kworker/1:2-1140  [001] ....    15.884179: rtcpu_vinotify_event: tstamp:669145908 tag:CHANSEL_PXL_EOF channel:0x00 frame:71 vi_tstamp:669145207 data:0x04370002
     kworker/1:2-1140  [001] ....    15.884182: rtcpu_vinotify_event: tstamp:669146030 tag:ATOMP_FE channel:0x00 frame:71 vi_tstamp:669145224 data:0x00000000
     kworker/1:2-1140  [001] ....    15.884184: rtcpu_vinotify_event: tstamp:669169022 tag:ATOMP_FS channel:0x00 frame:72 vi_tstamp:669168651 data:0x00000000
     kworker/1:2-1140  [001] ....    15.884185: rtcpu_vinotify_event: tstamp:669185221 tag:CHANSEL_PXL_SOF channel:0x00 frame:72 vi_tstamp:669184839 data:0x00000001
     kworker/1:2-1140  [001] ....    15.884187: rtcpu_vinotify_event: tstamp:669188279 tag:CHANSEL_LOAD_FRAMED channel:0x10 frame:72 vi_tstamp:669187905 data:0x08000000
     kworker/1:2-1140  [001] ....    15.884188: rtcpu_vinotify_event: tstamp:669658044 tag:CHANSEL_PXL_EOF channel:0x00 frame:72 vi_tstamp:669657336 data:0x04370002
     kworker/1:2-1140  [001] ....    15.884208: rtcpu_vinotify_event: tstamp:669658162 tag:ATOMP_FE channel:0x00 frame:72 vi_tstamp:669657354 data:0x00000000
     kworker/1:2-1140  [001] ....    15.884210: rtcpu_vinotify_event: tstamp:669681161 tag:ATOMP_FS channel:0x00 frame:73 vi_tstamp:669680780 data:0x00000000
     kworker/1:2-1140  [001] ....    15.884211: rtcpu_vinotify_event: tstamp:669697341 tag:CHANSEL_PXL_SOF channel:0x00 frame:73 vi_tstamp:669696969 data:0x00000001
     kworker/1:2-1140  [001] ....    15.884212: rtcpu_vinotify_event: tstamp:669700208 tag:CHANSEL_LOAD_FRAMED channel:0x10 frame:73 vi_tstamp:669699835 data:0x08000000
     kworker/1:2-1140  [001] ....    15.884214: rtcpu_vinotify_event: tstamp:670170168 tag:CHANSEL_PXL_EOF channel:0x00 frame:73 vi_tstamp:670169466 data:0x04370002
  • In the error case with the GStreamer pipeline, I never see any SOF (start-of-frame) events:
     kworker/2:0-24    [002] ....    10.047583: rtos_queue_peek_from_isr_failed: tstamp:487821815 queue:0x0b4b4940
     kworker/2:0-24    [002] ....    10.215613: rtos_queue_peek_from_isr_failed: tstamp:492821809 queue:0x0b4b4940
     kworker/2:0-24    [002] ....    10.383622: rtos_queue_peek_from_isr_failed: tstamp:497821783 queue:0x0b4b4940
     kworker/2:0-24    [002] ....    10.495666: rtos_queue_send_from_isr_failed: tstamp:501254880 queue:0x0b4a7698
     kworker/2:0-24    [002] ....    10.495671: rtos_queue_send_from_isr_failed: tstamp:501254999 queue:0x0b4ab1a8
     kworker/2:0-24    [002] ....    10.495673: rtos_queue_send_from_isr_failed: tstamp:501255108 queue:0x0b4acdd8
     kworker/2:0-24    [002] ....    10.495674: rtos_queue_send_from_isr_failed: tstamp:501255214 queue:0x0b4af718
     kworker/2:0-24    [002] ....    10.495676: rtos_queue_send_from_isr_failed: tstamp:501255328 queue:0x0b4b04d8
     kworker/2:0-24    [002] ....    10.495677: rtos_queue_send_from_isr_failed: tstamp:501255433 queue:0x0b4b1298
     kworker/2:0-24    [002] ....    10.495678: rtos_queue_send_from_isr_failed: tstamp:501255538 queue:0x0b4b2058
     kworker/2:0-24    [002] ....    10.495681: rtos_queue_send_failed: tstamp:501256005 queue:0x0b4a7698
     kworker/2:0-24    [002] ....    10.551622: rtos_queue_peek_from_isr_failed: tstamp:502821779 queue:0x0b4b4940
     kworker/2:0-24    [002] ....    10.663646: rtos_queue_send_from_isr_failed: tstamp:506556554 queue:0x0b4a7698
     kworker/2:0-24    [002] ....    10.663652: rtos_queue_send_from_isr_failed: tstamp:506556670 queue:0x0b4ab1a8
     kworker/2:0-24    [002] ....    10.663653: rtos_queue_send_from_isr_failed: tstamp:506556778 queue:0x0b4acdd8
     kworker/2:0-24    [002] ....    10.663655: rtos_queue_send_from_isr_failed: tstamp:506556885 queue:0x0b4af718
     kworker/2:0-24    [002] ....    10.663656: rtos_queue_send_from_isr_failed: tstamp:506556989 queue:0x0b4b04d8
     kworker/2:0-24    [002] ....    10.663657: rtos_queue_send_from_isr_failed: tstamp:506557093 queue:0x0b4b1298
     kworker/2:0-24    [002] ....    10.663659: rtos_queue_send_from_isr_failed: tstamp:506557198 queue:0x0b4b2058
     kworker/2:0-24    [002] ....    10.663661: rtos_queue_send_failed: tstamp:506557651 queue:0x0b4a7698
     kworker/2:0-24    [002] ....    10.663662: rtos_queue_send_from_isr_failed: tstamp:507144671 queue:0x0b4a7698
     kworker/2:0-24    [002] ....    10.663684: rtos_queue_send_from_isr_failed: tstamp:507144778 queue:0x0b4ab1a8
     kworker/2:0-24    [002] ....    10.663685: rtos_queue_send_from_isr_failed: tstamp:507144885 queue:0x0b4acdd8
     kworker/2:0-24    [002] ....    10.663687: rtos_queue_send_from_isr_failed: tstamp:507144992 queue:0x0b4af718
     kworker/2:0-24    [002] ....    10.663688: rtos_queue_send_from_isr_failed: tstamp:507145104 queue:0x0b4b04d8
     kworker/2:0-24    [002] ....    10.663689: rtos_queue_send_from_isr_failed: tstamp:507145209 queue:0x0b4b1298
     kworker/2:0-24    [002] ....    10.663690: rtos_queue_send_from_isr_failed: tstamp:507145314 queue:0x0b4b2058
     kworker/2:0-24    [002] ....    10.663692: rtos_queue_send_failed: tstamp:507145752 queue:0x0b4a7698
     kworker/2:0-24    [002] ....    10.723661: rtos_queue_peek_from_isr_failed: tstamp:507821773 queue:0x0b4b4940
     kworker/2:0-24    [002] ....    10.887635: rtos_queue_peek_from_isr_failed: tstamp:512821767 queue:0x0b4b4940
     kworker/2:0-24    [002] ....    10.999636: rtos_queue_peek_from_isr_failed: tstamp:517821757 queue:0x0b4b4940
     kworker/2:0-24    [002] ....    11.167667: rtos_queue_peek_from_isr_failed: tstamp:522821751 queue:0x0b4b4940
     kworker/2:0-24    [002] ....    11.335672: rtos_queue_peek_from_isr_failed: tstamp:527821745 queue:0x0b4b4940
     kworker/2:0-24    [002] ....    11.448096: rtos_queue_send_from_isr_failed: tstamp:531629165 queue:0x0b4a7698
     kworker/2:0-24    [002] ....    11.448101: rtos_queue_send_from_isr_failed: tstamp:531629275 queue:0x0b4ab1a8
     kworker/2:0-24    [002] ....    11.448103: rtos_queue_send_from_isr_failed: tstamp:531629383 queue:0x0b4acdd8
     kworker/2:0-24    [002] ....    11.448105: rtos_queue_send_from_isr_failed: tstamp:531629489 queue:0x0b4af718
     kworker/2:0-24    [002] ....    11.448106: rtos_queue_send_from_isr_failed: tstamp:531629594 queue:0x0b4b04d8
     kworker/2:0-24    [002] ....    11.448107: rtos_queue_send_from_isr_failed: tstamp:531629708 queue:0x0b4b1298
     kworker/2:0-24    [002] ....    11.448109: rtos_queue_send_from_isr_failed: tstamp:531629813 queue:0x0b4b2058
     kworker/2:0-24    [002] ....    11.448111: rtos_queue_send_failed: tstamp:531630783 queue:0x0b4a7698

For testing purposes, I changed this value to 10 s, but it does not seem to change anything.


I will have a closer look at that, but I currently don’t see any issues there.


Thank you!

Hello haensler,

That should be the root cause of the Argus pipeline crash, and of why capture sometimes needs around 5 seconds to start.
May I ask what kind of cameras these are? For example, are they connected to CSI directly, or is a SerDes chip used, and is virtual-channel support enabled?

Here is an approach to enable an infinite timeout, which might be an alternative way to work around this issue.
For example, you may configure enableCamInfiniteTimeout and test with the Argus pipeline:

$ sudo pkill nvargus-daemon
$ sudo enableCamInfiniteTimeout=1 nvargus-daemon &
$ gst-launch-1.0 nvarguscamerasrc ...

Note that the variable has to be passed through sudo as shown; a plain export in the calling shell would be dropped by sudo’s default environment reset.
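If the workaround helps, the variable can also be made persistent across boots with a systemd drop-in for the daemon. This is a sketch assuming the stock nvargus-daemon.service unit shipped with Jetpack; the file path is the standard systemd override location:

```
# /etc/systemd/system/nvargus-daemon.service.d/override.conf
[Service]
Environment="enableCamInfiniteTimeout=1"
```

After creating the file, apply it with sudo systemctl daemon-reload followed by sudo systemctl restart nvargus-daemon.service.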

Hi,

It is an IMX415 module, and it is connected directly via two CSI lanes (CSI4 of the TX2-NX).
There are two more cameras in the system (OV9281), each connected via two CSI lanes (CSI0 and CSI2). I have never seen this issue with those cameras.


I partially agree. It is indeed the root cause of why the GStreamer pipeline is crashing.
However, the 5 s are not a delay caused by the camera or the start of capturing. As already mentioned, the V4L pipeline tries to reset the capture channel after those 5 s, which then resolves the issue.

Setting enableCamInfiniteTimeout=1 only causes the GStreamer pipeline to wait for a very long time. Unlike the V4L pipeline, the GStreamer pipeline never attempts to reset the capture channel, so the error is never recovered.

To conclude, in my opinion the error occurs somewhere between the CSI input of the TX2 and the video pipeline. The camera seems to start sending images via CSI (I verified with an oscilloscope that there is at least some traffic on the CSI lines), but they are lost somewhere in the processing chain before the pipeline.
The error can be recovered by calling tegra_channel_error_recovery. Alternatively, it can be recovered by restarting the nvargus-daemon, which may implicitly do the same.
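Since restarting the daemon reliably recovers the error, a boot-time watchdog is one pragmatic mitigation until the root cause is found. The sketch below is hypothetical (the function name and retry logic are mine, not from this thread): it runs a capture command once, and if that fails it restarts nvargus-daemon and retries, mirroring the manual systemctl fix described above.

```shell
#!/bin/sh
# Hypothetical recovery wrapper: run the capture command, and on failure
# restart nvargus-daemon once and run it again.

run_with_recovery() {
    # "$@" is the capture command (e.g. the gst-launch-1.0 pipeline)
    if "$@"; then
        return 0
    fi
    echo "capture failed, restarting nvargus-daemon and retrying" >&2
    systemctl restart nvargus-daemon.service 2>/dev/null
    "$@"
}

# Example usage on the Jetson (pipeline as in the original post):
# run_with_recovery gst-launch-1.0 nvarguscamerasrc sensor-id=2 ! ...
```

The wrapper returns the exit status of the (possibly retried) capture command, so it can be dropped into a systemd unit or init script unchanged.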

Hello haensler,

There is an Argus error-recovery mechanism in the Jetpack 4.6 release.
It is a software approach: once there is a timeout failure in the camera pipeline, Argus reports it via the error event EVENT_TYPE_ERROR, and the application has to shut down. You may also see Argus/public/samples/userAutoExposure for reference.

However, what exactly is the failure before images are sent via CSI?
