I’m running into a strange issue here, and I haven’t been able to find any previous threads on it.
I’m running a GStreamer pipeline with multiple cameras (3x 2880x2160) using v4l2src with the “userptr” io-mode. This feeds into a custom GStreamer element that does CUDA processing and outputs frames that get sent to either nvv4l2h264enc or omxh264enc (both show the same problem).
I am using host-allocated, pinned memory for the buffers provided to v4l2src (allocated using cudaHostAlloc).
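For reference, the allocation looks roughly like this (a minimal sketch, not my exact code; the buffer count, frame size, and bytes-per-pixel are illustrative assumptions):

```cuda
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const int num_buffers = 4;                  // illustrative queue depth, not the real value
    const size_t frame_size = 2880 * 2160 * 2;  // assumes 2 bytes/pixel; actual format may differ
    void *buffers[num_buffers];

    for (int i = 0; i < num_buffers; ++i) {
        // cudaHostAlloc returns page-locked (pinned) host memory; these
        // pointers are then handed to v4l2src via io-mode=userptr so the
        // CUDA element can access the frames without an extra copy.
        cudaError_t err = cudaHostAlloc(&buffers[i], frame_size, cudaHostAllocDefault);
        if (err != cudaSuccess) {
            fprintf(stderr, "cudaHostAlloc failed: %s\n", cudaGetErrorString(err));
            return 1;
        }
    }

    // Buffers would normally live for the lifetime of the pipeline;
    // freed here only to keep the sketch self-contained.
    for (int i = 0; i < num_buffers; ++i)
        cudaFreeHost(buffers[i]);
    return 0;
}
```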
This works fine for a while, but usually after a few minutes the pipeline fails with an error from v4l2src:
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src1: Could not read from resource.
Additional debug info:
gstv4l2bufferpool.c(1040): gst_v4l2_buffer_pool_poll (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src1:
poll error 1: Resource temporarily unavailable (11)
Checking the system log shows the generic “PXL_SOF syncpt timeout! err = -11” error.
After enabling RTCPU trace logging (https://elinux.org/Jetson_TX2_Camera_BringUp), it looks like the underlying issue is a truncated frame:
rtcpu_vinotify_event: tstamp:115378036900 tag:ATOMP_FRAME_TRUNCATED channel:0x02 frame:0 vi_tstamp:115378036485 data:0x00000000
From what I’ve managed to piece together from the TRM, this event seems to be emitted by the OFIF and to indicate memory controller congestion. If that is indeed the case, what are some ways of mitigating it?