We have two IMX274 cameras plugged into the same TX2 and are using Video4Linux to read raw Bayer frames from them. Both our code and the v4l2-ctl program exhibit the same behavior. The command we use to demonstrate it is ‘v4l2-ctl -d /dev/video1 --set-fmt-video=width=3840,height=2160,pixelformat=RG10 -p 60 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=500’ for the second camera, and the same with /dev/video0 for the first camera.
When we run just one instance of this program at a time, either camera runs at 60fps. When we run the second program before the first has finished, the second camera opened runs at 1fps rather than 60fps. If we then re-start the first camera while the second is running at 1fps, it also runs at 1fps. This makes it look like it is not a bandwidth limit but rather some sort of system state change that is causing the behavior.
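For reference, the overlap can be reproduced from a single script. This is just a wrapper around the exact v4l2-ctl command above; the capture() helper name and the backgrounding are ours, the parameters are not:

```shell
# capture() is a hypothetical wrapper around the v4l2-ctl invocation
# described above; only the device node changes between the two cameras.
capture() {
  v4l2-ctl -d "$1" \
    --set-fmt-video=width=3840,height=2160,pixelformat=RG10 \
    -p 60 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=500
}
capture /dev/video0 &   # first instance: 60fps when run alone
capture /dev/video1 &   # overlapping second instance: drops to 1fps
wait
```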
Ubuntu 16.04.5. JetPack 3.3.
Thanks for any help!
(We are having different difficulties on TX1 JetPack 3.1 which I’d be glad to discuss in another thread, but we’re hoping to get something running quickly to do performance tests on our own code so I’ll leave them to another day.)
Getting 1fps usually means the CSI/VI didn’t receive any valid frames from the sensor. It could be that the sensor isn’t outputting correct data.
I could believe that. We were getting explicit timeouts and read failures in other conditions on the TX1 from the same code when the cameras were not working, but it could be that this is yet another way for it to fail. I’d be a bit surprised if it failed silently in this case given its vocal failures in others, but it could happen.
Any idea how to fix it? These are two independent processes running on two independent devices that work correctly when run separately and do not work correctly when run at the same time.
It’s better to first probe the sensor output to make sure the HW signal is free of problems.
Okay. We checked the images coming from the cameras when they are operating correctly, at 60fps. We’re reading 10-bit Bayer data and the images are the correct sizes and appear to have good data in them. We are also able to run our custom algorithms on the data and get good results when the cameras are operating properly.
We’re able to run each camera individually and get good results. Both of the cameras are working. We can run one camera, then the other, using the same program and everything is fine. The trouble comes when we start the second instance of the same program before the first one has finished. If we wait until it has finished (or if we kill the program), we can run again and it is fine. This indicates that the hardware is working fine.
I understand that both of them work individually. What I want to clarify is whether the HW or driver for one camera affects the other.
The video captured when the system is running normally contains valid images. The video captured by the slow-running (1fps) instance contains black frames. Also, one time when I ran the program on video0 while another was running on video1, it slowed down the already-running program on video1. Normally this is not the case: the original capture finishes normally, and only the second program slows down.
If we attempt to open both devices in the same process (and same thread), we get the same slow behavior on the second one that is opened.
When we look at the raw images captured from the cameras running at 60fps, they contain noise (brighter noise on the top few lines, then darker for the rest of the image). When we run argus_camera, we can see 3840x2160 video at 10fps fine. The 1fps images are completely black, but the 60fps images captured through Video4Linux as 10-bit Bayer do not seem to be correct either.
When we were testing on the TX1, we were able to get good images from one of the cameras (no video from the other one). Looks like something in our setup on the TX2 is not quite right. Will look into getting good images and then retry the multi-camera code.
We are able to capture images from each camera at 1920x1080 at 120 frames/second and get good images on each camera individually. (Part of our issue before was that we were converting raw images to PGM without regard to endianness and the need to shift bits. Once we handled that, we could run 1920x1080 fine. We also focused both cameras on the same target so that we can compare images more easily.)
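To illustrate the endianness issue we hit, here is a minimal sketch of wrapping one raw frame as a binary PGM. The tiny 4x2 frame.raw here is fake sample data standing in for a real capture; the assumption is that RG10 captures store each 10-bit sample in a little-endian 16-bit word, while binary PGM stores samples big-endian, so each byte pair must be swapped (dd conv=swab does exactly that). Declaring maxval 1023 keeps the 10-bit values unshifted:

```shell
# Fake one 4x2 frame of little-endian 16-bit samples (stand-in for a capture).
WIDTH=4 HEIGHT=2
printf '\x00\x01\x00\x02\x00\x03\x00\x00\x00\x01\x00\x02\x00\x03\x00\x00' > frame.raw
{
  printf 'P5\n%d %d\n1023\n' "$WIDTH" "$HEIGHT"   # maxval 1023 = 10-bit data
  dd conv=swab if=frame.raw 2>/dev/null            # swap bytes: LE -> BE samples
} > frame.pgm
```

Viewers that honor maxval will display this directly; scaling to the full 16-bit range (the bit shift we mentioned) is an alternative when a tool assumes maxval 65535.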
In this case, we’re getting similar behavior: full-speed capture on the first camera run and 1fps on the second. However, the images from the second camera are black for only the first few frames and then they have the correct color. It looks like they may become correct after the first camera’s run finishes (although the frame rate remains at 1fps even then).
Thanks for any advice on how to continue to debug this.
Also, maybe one time out of four both streams go at the full video rate.
Try the commands below before capturing to check if they help.
echo 1 > /sys/kernel/debug/bpmp/debug/clk/vi/mrq_rate_locked
echo 1 > /sys/kernel/debug/bpmp/debug/clk/isp/mrq_rate_locked
echo max_rate > /sys/kernel/debug/bpmp/debug/clk/vi/rate
echo max_rate > /sys/kernel/debug/bpmp/debug/clk/isp/rate
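Note that ‘max_rate’ in the last two lines stands for the numeric value read from the corresponding max_rate node, not the literal string. A sketch of the substitution (the lock_and_max helper name is ours; the debugfs paths are those from the commands above, run as root on the TX2):

```shell
# lock_and_max: lock one debugfs clock and pin it at its reported maximum.
lock_and_max() {
  d="$1"
  echo 1 > "$d/mrq_rate_locked"     # lock the clock rate
  cat "$d/max_rate" > "$d/rate"     # write the numeric max rate, not the string
}
# Intended use on the TX2 (as root):
# for c in vi isp; do lock_and_max /sys/kernel/debug/bpmp/debug/clk/$c; done
```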
I had run jetson_clocks.sh before and it did not help. I tried it again along with the other commands listed. The output was as follows:
root@tegra-ubuntu:/home/nvidia# echo 1 > /sys/kernel/debug/bpmp/debug/clk/vi/mrq_rate_locked
root@tegra-ubuntu:/home/nvidia# echo 1 > /sys/kernel/debug/bpmp/debug/clk/isp/mrq_rate_locked
root@tegra-ubuntu:/home/nvidia# cat /sys/kernel/debug/bpmp/debug/clk/vi/max_rate
root@tegra-ubuntu:/home/nvidia# cat /sys/kernel/debug/bpmp/debug/clk/isp/max_rate
root@tegra-ubuntu:/home/nvidia# echo max_rate > /sys/kernel/debug/bpmp/debug/clk/vi/rate
bash: echo: write error: Input/output error
root@tegra-ubuntu:/home/nvidia# echo max_rate > /sys/kernel/debug/bpmp/debug/clk/isp/rate
bash: echo: write error: Input/output error
When I do all of these, I am able to read 1920x1080@120fps from both cameras reliably using v4l2-ctl. Looks like it was the extra clock settings that did it!