Should low_latency be enabled when capturing high-speed images?

/* Dispatch to the multi-thread (low-latency) or single-thread capture
 * path depending on the channel's low_latency setting. */
static int tegra_channel_capture_frame(struct tegra_channel *chan,
				struct tegra_channel_buffer *buf)
{
	int ret = 0;

	if (chan->low_latency)
		ret = tegra_channel_capture_frame_multi_thread(chan, buf);
	else
		ret = tegra_channel_capture_frame_single_thread(chan, buf);

	return ret;
}
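For reference, chan->low_latency appears to be driven by a V4L2 control exposed by the driver, though the exact control name varies by L4T release. Below is a hedged userspace sketch that enumerates the device's controls and enables any whose name contains "low_latency"; the device path and the control name are assumptions, so check the output of `v4l2-ctl -l` on your platform first.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
	/* Device path is an assumption; adjust for your setup. */
	int fd = open("/dev/video0", O_RDWR);
	struct v4l2_queryctrl qc;

	if (fd < 0)
		return 1;

	memset(&qc, 0, sizeof(qc));
	qc.id = V4L2_CTRL_FLAG_NEXT_CTRL;
	while (ioctl(fd, VIDIOC_QUERYCTRL, &qc) == 0) {
		/* Match by name because the control ID is driver-specific. */
		if (!(qc.flags & V4L2_CTRL_FLAG_DISABLED) &&
		    strstr((const char *)qc.name, "low_latency")) {
			struct v4l2_control c = { .id = qc.id, .value = 1 };
			if (ioctl(fd, VIDIOC_S_CTRL, &c) == 0)
				printf("enabled control: %s\n", qc.name);
		}
		qc.id |= V4L2_CTRL_FLAG_NEXT_CTRL;
	}
	close(fd);
	return 0;
}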

When capturing high-speed (1000 fps) images, frames are dropped continuously, but the kernel gives no error message; only after a long time does it print: PXL_SOF syncpt timeout! err = -11.
Do you have any suggestions?
Thank you!

What kind of sensor can output 1000 fps? What’s the resolution? If that’s true, I think the NVCSI/VI can’t handle it due to bandwidth limits.

Just a comment to add to what @ShaneCCC mentions: if your device is interrupt driven, there is a high chance that you would run into IRQ starvation and dropped frames. One would have to buffer several frames together in the camera itself and then send a batch of frames to reduce the IRQ rate. Compared with interpolating between frames, batching has the slight advantage that you get the exact images instead of an interpolation; but for the IRQ-driven case, latency cannot be improved beyond the batch interval (rough numbers are sketched after this post).

In other cases, where you are not dependent upon a fixed IRQ rate, you would still lose data any time the CPU can’t keep up.
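To make the trade-off concrete, here is a small, purely illustrative C calculation of IRQ rate versus added latency for a few batch sizes at 1000 fps; all numbers are assumptions, not measurements.

#include <stdio.h>

/* Illustrative batching trade-off: one interrupt per batch of frames. */
int main(void)
{
	const double fps = 1000.0;	/* frame rate from the question */
	const int batches[] = { 1, 4, 8, 16 };
	size_t i;

	for (i = 0; i < sizeof(batches) / sizeof(batches[0]); i++) {
		int n = batches[i];
		double irq_per_sec = fps / n;		/* one IRQ per batch */
		double latency_ms = 1000.0 * n / fps;	/* wait for a full batch */
		printf("batch=%2d  irq/s=%6.1f  added latency=%4.1f ms\n",
		       n, irq_per_sec, latency_ms);
	}
	return 0;
}

At 1000 fps, batching 8 frames cuts the interrupt rate from 1000/s to 125/s at the cost of roughly 8 ms of extra latency.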

  1. The data comes from an FPGA; the resolution is 640x128 @ 1000 fps.
  2. I think it is in interrupt mode, but I don’t know where to confirm this.
  3. If several frames are buffered together, there will be a large delay at low speeds. Can V4L2 change the acquisition resolution without exiting the process?
  4. I don’t quite understand what you mean:

Thank you!

Try boosting the NVCSI/VI clocks and running the system in performance mode (e.g. via nvpmodel and jetson_clocks) to check whether anything improves.

https://elinux.org/Jetson/l4t/Camera_BringUp
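For reference, the clock boosting that page describes can also be scripted. Below is a hedged C sketch that locks the VI/ISP/NVCSI clocks to their maximum rates through the bpmp debugfs nodes; the paths are taken from the linked elinux page, are assumptions for Xavier-class devices, differ across platforms and L4T releases, and require root.

#include <stdio.h>

/* Lock one bpmp clock to its maximum rate via debugfs. */
static int max_clock(const char *base)
{
	char path[256], rate[64];
	FILE *f;

	/* Lock the rate so the kernel cannot scale it back down. */
	snprintf(path, sizeof(path), "%s/mrq_rate_locked", base);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fputs("1", f);
	fclose(f);

	/* Read the maximum supported rate... */
	snprintf(path, sizeof(path), "%s/max_rate", base);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (!fgets(rate, sizeof(rate), f)) {
		fclose(f);
		return -1;
	}
	fclose(f);

	/* ...and apply it as the current rate. */
	snprintf(path, sizeof(path), "%s/rate", base);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fputs(rate, f);
	fclose(f);
	return 0;
}

int main(void)
{
	const char *clks[] = {
		"/sys/kernel/debug/bpmp/debug/clk/vi",
		"/sys/kernel/debug/bpmp/debug/clk/isp",
		"/sys/kernel/debug/bpmp/debug/clk/nvcsi",
	};
	size_t i;

	for (i = 0; i < sizeof(clks) / sizeof(clks[0]); i++)
		if (max_clock(clks[i]))
			fprintf(stderr, "failed: %s\n", clks[i]);
	return 0;
}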

It’s better than before, but there is still some frame loss. In addition, if I could reconfigure the V4L2 acquisition resolution without exiting the process, my problem would also be solved.
Thank you!
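On the resolution question: V4L2 does allow renegotiating the format on an already-open device, provided streaming is stopped and the old buffers are released first. A minimal sketch of the usual ioctl sequence follows; error handling, buffer mmap/munmap and requeueing are omitted, the GREY pixel format is an assumption about the FPGA output, and whether this works without reopening the node depends on the driver.

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Change capture resolution on an already-open V4L2 device (fd)
 * without restarting the process. */
int change_resolution(int fd, unsigned w, unsigned h, unsigned n_bufs)
{
	enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	struct v4l2_requestbuffers req;
	struct v4l2_format fmt;

	if (ioctl(fd, VIDIOC_STREAMOFF, &type) < 0)
		return -1;

	/* Free the old buffers so the format is allowed to change. */
	memset(&req, 0, sizeof(req));
	req.count = 0;
	req.type = type;
	req.memory = V4L2_MEMORY_MMAP;
	if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
		return -1;

	/* Negotiate the new resolution. */
	memset(&fmt, 0, sizeof(fmt));
	fmt.type = type;
	fmt.fmt.pix.width = w;
	fmt.fmt.pix.height = h;
	fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_GREY; /* assumption */
	fmt.fmt.pix.field = V4L2_FIELD_NONE;
	if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
		return -1;

	/* Allocate buffers at the new size. */
	memset(&req, 0, sizeof(req));
	req.count = n_bufs;
	req.type = type;
	req.memory = V4L2_MEMORY_MMAP;
	if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
		return -1;

	/* ...mmap the new buffers and VIDIOC_QBUF each one here... */
	return ioctl(fd, VIDIOC_STREAMON, &type);
}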

If you lose frames while not batching, there is no possibility of ever finding out what was actually in those frames; at best you could use some sort of interpolation to guess at their contents. If you batch, you spend more time (more latency), but you never need to guess at missing frames, because nothing goes missing; the frames are simply very slow to arrive.
