Hey,
Board: AGX Orin
L4T: 36.4.0
I am developing a driver following the NVIDIA Jetson Linux Developer Guide documentation.
When testing the driver with the following command:
v4l2-ctl --stream-mmap --stream-count=1 -d /dev/video0 --stream-to=frame.raw
dmesg shows the following error:
[ 182.553602] tegra-camrtc-capture-vi tegra-capture-vi: corr_err: discarding frame 0, flags: 0, err_data 131072
How can I debug this error?
Thanks!
Hi,
For basic camera functionality, the first thing to check is the device and driver configuration.
You can refer to the programming guide below for detailed information on the device tree and driver implementation.
https://docs.nvidia.com/jetson/archives/r36.3/DeveloperGuide/SD/CameraDevelopment/SensorSoftwareDriverProgramming.html?highlight=programing#sensor-software-driver-programming
Please refer to the Applications Using V4L2 IOCTL Directly section and use the V4L2 IOCTLs to verify basic camera functionality.
https://docs.nvidia.com/jetson/archives/r36.3/DeveloperGuide/SD/CameraDevelopment/SensorSoftwareDriverProgramming.html?highlight=programing#to-run-a-v4l2-ctl-test
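For example, a rough sequence would look like the one below; the width/height/pixelformat values are placeholders, please use one of the modes your sensor driver actually reports.
# list the formats and frame sizes the sensor driver exposes
v4l2-ctl -d /dev/video0 --list-formats-ext
# capture one frame directly through V4L2, bypassing the ISP
# (replace width/height/pixelformat with one of the modes listed above)
v4l2-ctl -d /dev/video0 --set-fmt-video=width=1920,height=1080,pixelformat=RG10 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=1 --stream-to=frame.raw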
Once the configuration is confirmed and the capture still fails, the link below explains how to gather logs and gives some debugging tips.
https://elinux.org/Jetson/l4t/Camera_BringUp#Steps_to_enable_more_debug_messages
Thanks!
hello user27558,
please also see Topic 318537 for some information on debugging the discarding frame corr_err messages.
Hello @JerryChang and @carolyuu, thanks a lot for your help.
I followed the debug steps, and especially these:
sudo su
echo 1 > /sys/kernel/debug/bpmp/debug/clk/nvcsi/mrq_rate_locked
cat /sys/kernel/debug/bpmp/debug/clk/nvcsi/max_rate | tee /sys/kernel/debug/bpmp/debug/clk/nvcsi/rate
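For reference, this is roughly how I confirmed the lock took effect before re-running the capture; the two cat lines simply read back the same debugfs nodes as above.
# read back the NVCSI clock to confirm it now runs at max_rate
cat /sys/kernel/debug/bpmp/debug/clk/nvcsi/max_rate
cat /sys/kernel/debug/bpmp/debug/clk/nvcsi/rate
# then re-run the original capture
v4l2-ctl --stream-mmap --stream-count=1 -d /dev/video0 --stream-to=frame.raw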
And miraculously, after that nvarguscamerasrc worked, and so did v4l2-ctl.
And indeed I can get a picture.
- How do I solve this issue, so that I don’t have to set the nvcsi clock?
- What does it mean that setting the clock to max rate solves the issue?
note: I am using a SerDes setup
Thanks a lot for your help!
hello user27558,
as you can see in the developer guide, the sensor pixel clock must be configured correctly to avoid potential issues.
you may double-check the SerDes Pixel Clock section to review the clock settings, or try setting a higher clock rate for testing.
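for example, here is a rough sketch of the mode-node clock properties; the property names follow the SerDes Pixel Clock section of the guide, and the node name and every value below are placeholders that you need to replace with your own setup.
mode0 {
    /* placeholder values -- replace with your sensor / SerDes configuration */
    num_lanes = "4";                  /* CSI lanes out of the deserializer */
    pix_clk_hz = "74250000";          /* sensor pixel clock */
    serdes_pix_clk_hz = "1200000000"; /* deserializer output data rate * CSI lanes / bits per pixel */
    /* other mode properties omitted */
};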
Hey! Thanks for your answer!
Yes, I saw that in the documentation, but I have a question about the following formula:
serdes_pix_clk_hz = (deserializer output data rate in hertz) * (number of CSI lanes) / (bits per pixel).
- What value is the deserializer output data rate in hertz? How do I get it?
- Is the number of CSI lanes the number per camera or per deserializer?
Thanks!
hello user27558,
>> Q1
it depends on your SerDes chip settings; please check its output clock rate.
>> Q2
please use the number of CSI lanes per deserializer, since you have a SerDes setup.
@JerryChang
it depends on your SerDes chip settings; please check its output clock rate.
It is 4700 MHz.
The camera output pixel rate per lane is ≈ 58.667 MHz.
hello user27558,
you may follow the formula to configure the device tree settings.
Hey @JerryChang,
So, if I understand correctly, it would be the following:
4 700 000 000 ⋅ 4 / 12 = 1 566 666 667 Hz
Is it correct?
hello user27558,
yes, please give it a try.
please also note that deskew calibration is required if the sensor or deserializer is using D-PHY and the output data rate is > 1.5 Gbps.
Will try. And the deskew calibration is a problem for me, as I don’t believe our hardware supports it at the moment.
Can you suggest the best course of action for us to limit our output data rate to 1.5 Gbps?
hello user27558,
unfortunately, deskew calibration is a must if the data rate is > 1.5 Gbps; otherwise, the camera firmware will keep waiting for the deskew signal from the sensor side. It only enables the pixel parser once deskew calibration has completed.
Then how do I decrease the data rate? Any suggestions?
hello user27558,
the Jetson platform is a passive device on the receiver side; you have to reduce the output data rate from the SerDes side.