Continuing topic 145687, as I still have the issue (I was busy with other things, and DaneLLL told me to make a new topic if the issue persisted).
I’m now running the same setup but with a higher input resolution (3840x2160 instead of 2880x2160), and the crashes seem to be more frequent.
The issue is still the same (an ATOMP_FRAME_TRUNCATED error while encoding and doing CUDA processing), but after investigating some more it doesn't seem to happen when using only 2 cameras, and it appears to be somewhat related to CPU load.
First of all, the issue only seems to occur when encoding. I've tried outputting to nvoverlaysink instead of encoding with omxh264enc or nvv4l2h264enc. When outputting to a screen, the CPU usage as reported by htop is around 85%, spread fairly evenly across cores (no single core goes above 50%). This setup is stable (it ran for several days without the issue occurring).
When outputting to /dev/null through nvv4l2h264enc, the total CPU usage of the GStreamer processes is around 93%, still spread fairly evenly across cores (no core going above 50%). However, after some amount of time (ranging from a few seconds to tens of minutes) the pipeline crashes with
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src2: Could not read from resource. (I've had this happen with all three of the sensors.)
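For reference, here are stripped-down sketches of the two variants I'm comparing, one per sensor, with my CUDA processing element omitted. The device path, resolution caps, and element chain here are placeholders approximating my setup, not my exact pipeline:

```shell
# Stable variant: capture and render to screen instead of encoding
gst-launch-1.0 v4l2src device=/dev/video0 \
  ! 'video/x-raw, width=3840, height=2160' \
  ! nvvidconv ! 'video/x-raw(memory:NVMM)' \
  ! nvoverlaysink

# Crashing variant: encode with nvv4l2h264enc and discard the output
gst-launch-1.0 v4l2src device=/dev/video0 \
  ! 'video/x-raw, width=3840, height=2160' \
  ! nvvidconv ! 'video/x-raw(memory:NVMM)' \
  ! nvv4l2h264enc ! h264parse ! filesink location=/dev/null
```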
I ran the encoding pipeline (outputting to /dev/null) with input from only two sensors, but otherwise exactly the same, and left it running overnight; after 18 hours it was still running. So this might be related to the amount of data coming in from the three sensors?
Could there still be some DVFS happening that causes pauses and makes the ATOMP buffer overflow?