This shouldn't be this hard, recording from 2 cameras

Hello there, I usually try not to bother (and there have already been a handful of posts about multiple cameras), but from all the posts I’ve read there seem to be some helpful and knowledgeable people here.

tl;dr: We’re trying to record two videos from two cameras on a Jetson Nano, and there’s always a new problem…

  • First we tried USB cameras and, after trying multiple models and some trial and error, got something working, but e-con’s cameras didn’t capture enough light at a sufficient framerate (even though our conditions aren’t very demanding: just 20-30 fps on a decently lit football/soccer pitch). They said they would get back to me on this, but haven’t yet.

  • Then we got some CSI cameras from them, managed to install their drivers, and got very good results. So we bought Leopard Imaging’s carrier board (LI-NANO-CB) so that we can have (more than) 2 MIPI-CSI connectors… but then we can’t use e-con’s drivers without overwriting Leopard’s drivers (from what I investigated in the flashing procedures, both replace /boot/Image and the device tree), and neither support team has given me a solution for this. I don’t have the expertise to write drivers myself (and it seems weird that I should have to). If anyone has any idea I’d love a helping hand!

  • So we tried Leopard’s CSI cameras (LI-IMX219-MIPI-FF-NANO), and when I got those working… the quality doesn’t seem too good (and they have the following problem as well).

  • An additional problem is that in most cases, even with a fast microSD card (https://www.walmart.com/ip/Onn-128GB-Class-10-U3-V30-microSDXC-Flash-Memory-Card-up-to-100MB-s-read-speed/772835319), the Nano can’t seem to keep up with writing HD (1280x720) or larger frames. What I have in mind for the moment is to switch from what we’re using now, OpenCV with the GStreamer backend (for LI’s cameras), to writing directly to file with a GStreamer pipeline that captures both cameras at the same time… even though this is not an ideal solution, because then we can’t process the frames before writing them to file. Is there some way to quickly compress the frames in OpenCV before writing them? Or any other ideas?

We don’t need perfect synchronization between the cameras: our current code simply grabs from each camera in turn, and that frame-time difference is already fine for us. It then sends the frames to other threads which write them (so we know reading frames is not the slow part), but once the frames are big enough the writes are way too slow.

Does anyone have a solution/suggestion/opinion on any of our problems? I don’t feel like what we’re trying to do should be this hard (we’re not even doing any computer vision on the Nano, which I originally had in mind!), but then again we’ve been going back and forth for over a month now and we seem to find problems around every corner!

Cheers,
Martín

hello martin.fanaty,

May I know which commands you’re using to save the recorded files locally?
You should also narrow the issue down and check whether it’s I/O related by enabling a single camera for file writing.
Please refer to the command below to record video from a single camera.
thanks

$ gst-launch-1.0 nvarguscamerasrc sensor-id=0 num-buffers=300 ! 'video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1' ! nvtee ! omxh264enc bitrate=20000000 ! qtmux ! filesink location=video.mp4

Hello JerryChang, thank you for your reply.

We’re currently using OpenCV with a Gstreamer pipeline. It looks something like this:

import cv2 as cv
...
# GStreamer pipeline for Leopard's CSI camera: capture the full sensor
# resolution into NVMM memory, downscale/flip with nvvidconv, then convert
# to BGR on the CPU for appsink/OpenCV.
gst_pipe = (f'nvarguscamerasrc sensor-id={idx} maxperf=1 ! '
             'video/x-raw(memory:NVMM),format=(string)NV12,'
            f'width=(int)3280,height=(int)2464,framerate=(fraction){fps}/1 ! '
            f'nvvidconv flip-method=(int){flip} ! '
            f'video/x-raw,width=(int){width},height=(int){height},format=I420 ! '
             'videoconvert ! video/x-raw,format=(string)BGR ! appsink')
caps[0] = cv.VideoCapture(gst_pipe, cv.CAP_GSTREAMER)
...

We do this for each camera and then alternate reading frames from each, after which we save them to their files, all with OpenCV. This is slow, I know, because the frames are being written out directly. Reading frames is fine; only the writing takes a lot of time. Ideally there’d be a way for writing each frame to take less time.
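Roughly, the loop looks like the sketch below (simplified; caps and writers stand for our lists of cv.VideoCapture and cv.VideoWriter objects, and the exact names are illustrative):

def capture_and_write(caps, writers, num_frames):
    # Grab from both cameras back to back so the frame times stay close,
    # then retrieve and write each frame. The write() calls are what
    # dominate the loop time at larger resolutions.
    for _ in range(num_frames):
        for cap in caps:
            cap.grab()
        for cap, writer in zip(caps, writers):
            ok, frame = cap.retrieve()
            if ok:
                writer.write(frame)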

So that’s why I was thinking of switching to saving directly to file, similar to the command you posted. I’ve already tested the following and had no issues:

gst-launch-1.0 -e \
    nvarguscamerasrc maxperf=1 sensor-id=0 num-buffers=300 ! 'video/x-raw(memory:NVMM), width=(int)3280, height=(int)2464, format=(string)NV12, framerate=(fraction)30/1' ! nvv4l2h265enc control-rate=1 bitrate=8000000 ! 'video/x-h265, stream-format=(string)byte-stream' ! h265parse ! qtmux ! filesink location=test_1.mp4 -e \
    nvarguscamerasrc maxperf=1 sensor-id=3 num-buffers=300 ! 'video/x-raw(memory:NVMM), width=(int)3280, height=(int)2464, format=(string)NV12, framerate=(fraction)30/1' ! nvv4l2h265enc control-rate=1 bitrate=8000000 ! 'video/x-h265, stream-format=(string)byte-stream' ! h265parse ! qtmux ! filesink location=test_2.mp4 -e

The command you wrote also works, I just tested it (by the way, is using omxh264enc better than nvv4l2h265enc? What are the differences? I based mine on the Accelerated GStreamer User Guide for R32.1).

The problem is that if I switch to using GStreamer directly it’s harder to control the stream: for example, if I want to save videos of one minute each without losing frames in between, or if I want to save a picture while already recording. Also, I’m not sure this approach will keep the two streams roughly synchronized; I’ll have to test some more for that.
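To illustrate the kind of control I mean, here is a rough sketch of how we can rotate output files from the application when the frames pass through OpenCV (make_writer is a placeholder for whatever builds the cv.VideoWriter for each segment):

import time

def record_segments(cap, make_writer, segment_seconds=60):
    # Illustrative only: keep reading frames continuously while rotating the
    # output file every segment_seconds. make_writer(index) is assumed to
    # return a cv.VideoWriter for segment number index.
    index = 0
    writer = make_writer(index)
    start = time.time()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if time.time() - start >= segment_seconds:
            writer.release()
            index += 1
            writer = make_writer(index)
            start = time.time()
        writer.write(frame)
    writer.release()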

Please advise, or if something wasn’t clear, just ask.
Martín

You may use BGRx instead of I420 as the output of nvvidconv; it would make the videoconvert task lighter. You may also try adding a queue element between videoconvert and appsink.

For output, you may also use a cv::VideoWriter with a GStreamer pipeline so the file is encoded in HW (appsrc → queue → videoconvert into BGRx → nvvidconv into I420 or NV12 in NVMM memory → nvv4l2h264enc → h264parse → qtmux or matroskamux → filesink).
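Something like the following sketch (in Python, since that's what you're using; the resolution, framerate and filename are illustrative and should be adapted):

import cv2 as cv

# Assumed output size/framerate; must match the frames passed to write().
width, height, fps = 1280, 960, 21

# appsrc receives BGR frames from OpenCV, nvvidconv moves them into NVMM,
# nvv4l2h264enc does the hardware encoding, and qtmux wraps it into an .mp4.
writer_pipe = (
    'appsrc ! queue ! videoconvert ! video/x-raw,format=BGRx ! '
    'nvvidconv ! video/x-raw(memory:NVMM),format=NV12 ! '
    'nvv4l2h264enc ! h264parse ! qtmux ! filesink location=out_cam0.mp4'
)
writer = cv.VideoWriter(writer_pipe, cv.CAP_GSTREAMER, 0, float(fps), (width, height))

# In the capture loop:
#   ok, frame = cap.read()
#   if ok:
#       writer.write(frame)
# and writer.release() when finished.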

I had switched from BGRx to I420 for some reason and hadn’t rolled it back. Still, that change alone didn’t do much.

But the other suggestion could be great! I didn’t know cv::VideoWriter also accepted GStreamer pipelines. I will test it and write back tomorrow, thank you!

@Honey_Patouceul

After some trial and error, I made a minimal example and it works great with any of these pipelines: https://pastebin.com/7ihyPg06

In some of these tests it prints lines like the following, even though it still seems to work:

>>> out = cv.VideoWriter(out_gst_pipe, cv.CAP_GSTREAMER, cc, 21, (1280, 960))
Failed to query video capabilities: Inappropriate ioctl for device
Opening in BLOCKING MODE

Also, when I release the writer, it prints the following:

>>> out.release()

(python3:7886): GStreamer-CRITICAL **: 11:34:33.950: gst_mini_object_unref: assertion 'GST_MINI_OBJECT_REFCOUNT_VALUE (mini_object) > 0' failed

(Edit: it seems this was fixed in R32.2? https://devtalk.nvidia.com/default/topic/1064032/jetson-agx-xavier/error-encoding-with-opencv-gstreamer-and-omxh264enc-nvv4l2h264enc/post/5388680/#5388680 )

Anyway, that doesn’t seem to affect the result. The real problem is that this didn’t seem to work in our application, which is very similar to that test script (it uses the same pipelines). It “writes” about 10 times (the actual file is 0 bytes) and then hangs. Even if I don’t write, trying to release it also hangs.

The only thing I can think of is that a cv::VideoWriter with a GStreamer pipeline doesn’t work well in multi-threaded environments? I think I saw another thread mentioning something like that about nvv4l2h264enc and omxh264enc. In any case it does seem to work single-threaded, so I will rewrite it like that.

I will do some more tests and try to fix the remaining issues (such as sped-up video and hanging on out.release() or cap.release()). If everything works I’ll come back and mark this as resolved.

Of course if you have any other suggestions they will be very well received.
Thanks!

I rewrote the application to be single-threaded, but… VideoCapture::release() hangs when releasing Leopard’s cameras. This means I have to restart the system after each time I use the cameras. It could be related to the assertion failure I mentioned in the previous post, so it may be fixed in R32.2 (I have R32.1).

So… sorry for the noob question but I can’t find it… is there a way to upgrade L4T without flashing from scratch (and having to rebuild OpenCV, for example)? If there are instructions somewhere I’ve missed, I’d appreciate the link.

Thank you,
Martín

hello martin.fanaty,

Currently, you’ll need to perform a re-flash to update the L4T version.
I also suggest contacting the Leopard team to confirm you’re using the latest kernel driver.
thanks

Ok, I’ll have to try that then, though they are kind of slow to answer.

One more problem: suddenly, when I capture through OpenCV (VideoCapture with the GStreamer backend), the capture pipeline gives me a poor 15-16 fps (grab() is what takes so long, as if it were targeting a different framerate)… but running the pipeline with gst-launch-1.0 doesn’t have that problem!

Even the minimal example I shared a couple of posts ago through the pastebin has this problem, but when saving directly to file with gst-launch-1.0 the framerate looks normal.

I have no idea what could be causing this, and I have double-checked that the pipelines are pretty much the same (except that for OpenCV there is a videoconvert to BGR). I have tried different resolutions and adding a queue element before appsink… nothing seems to change this.

Any ideas??

Hi,
The reason is the same as in
https://devtalk.nvidia.com/default/topic/1066465/jetson-nano/nano-not-using-gpu-with-gstreamer-python-slow-fps-dropped-frames/post/5401198/#5401198

You may check if you can run gstreamer + cuda::GpuMat or MMAPI + cuda::GpuMat:
https://devtalk.nvidia.com/default/topic/1022543/jetson-tx2/gstreamer-nvmm-lt-gt-opencv-gpumat/post/5311027/#5311027
https://devtalk.nvidia.com/default/topic/1047563/jetson-tx2/libargus-eglstream-to-nvivafilter/post/5319890/#5319890

Our application is written in Python and OpenCV lets us synchronize the cameras (and process the frames lightly if we wanted).

What you are proposing is to move the processing into a GStreamer filter, right? But then it would have to be in C++, I don’t know how I could synchronize the frames from there, and I wouldn’t have as much control from the application (such as starting, stopping, cutting, saving images) without some complicated solution. (I also have no experience writing GStreamer applications, so that would take me even longer.)

Is there maybe a way to just receive GpuMats from VideoCapture::read()/::retrieve()? I could even write this small part in C/C++ if necessary. Or some other solution to make ::grab() faster? (Remember that is what’s taking too long in the capture loop.)

To be clear, when I first tried the VideoWriter GStreamer pipeline (post #6) the whole flow seemed to work at a great framerate. But when I tried it again on Monday it had dropped. I’m not sure what could have caused this; I checked that the board is receiving 12V and is in MAXN mode, and ran jetson_clocks… is there anything else that could be causing this slowdown?

You’ve been very helpful so far; I felt like we were close before the framerate worsened.

Well, I don’t know how or why, but it started working again! I can capture from both cameras and save them synchronized, while doing other operations on the board, all at the same time, and it does so at the correct framerate (in this case 21 fps) with the full resolution downscaled to 1280x960. I even rebooted with fingers crossed and it keeps working. I hope this doesn’t magically change again.

I haven’t reflashed yet, but the cameras seem to release well enough (even though the “CRITICAL” message keeps appearing).

Thank you so much for your help! I’ll mark this as solved (and hope it continues that way).

Sincerely,
Martín

Well, it was mostly solved… but now I realize the synchronization is not that good. There’s a (consistently?) bigger delay on one camera with respect to the other. They’re both being captured the same way, with the OpenCV + GStreamer pipeline. What could be causing this?

I understand perfect sync is a harder thing to achieve (maybe better done with a hardware solution), but I don’t have such a stringent requirement: I will have no problem with a small (0.2 seconds?) difference.

It seems like there’s a noticeable delay between real time and the effective capture; what could cause this to be different for each camera?

I will try setting the same capture properties on both cameras and see if that helps.

Hi,
There is an Argus sample demonstrating sensor synchronization:

tegra_multimedia_api/argus/samples/syncSensor

FYR. You may give it a try if you are interested in using tegra_multimedia_api.

I was finally able to solve most (if not all) of the problems! (knock on wood)

  • I wasn't able to run most of the samples I tried from tegra_multimedia_api (I had similar problems to those mentioned in some other posts), but it doesn't matter, at least for now.
  • To fix the camera synchronization I simply [good programmers cover your eyes] ... added a `time.sleep(0.3)` before the capture loop; this seems to do the trick, and the cameras line up very, very well, amazingly.
  • I'm also able to get the capture properties from one camera (by running it for a couple of seconds in autoexposure), then use `v4l2-ctl` to get the exposure and [analog] gain values, scale them (exposure from us to ns, gain divided by 16), and finally use them for manual-exposure capture on both cameras (see the sketch after this list). (Idea from a post I can't find now.) Adjusting is not "live", but this is completely fine for our use case.
  • So the LAST detail that I'm missing concerns white balance: I'm using nvarguscamerasrc; can it be set manually? I have the feeling that when awblock is set to true, wbmode is ignored, since 'awblock=true wbmode=X' with any X in [0-9] shows the same thing. On the other hand, with awblock=false the wbmode does seem to have an effect. In any case, it isn't very precise. So, two questions:
  1. How do I set a manual white balance? I can set wbmode=9 (manual), but then how do I specify the temperature? Is this something still pending from the nvcamerasrc -> nvarguscamerasrc migration?
  2. How do I get the last white balance that was set? `v4l2-ctl --list-ctrls-menus` doesn't list any option for it. Is there another tool I should use?
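For reference, the exposure/gain matching mentioned above looks roughly like this (a sketch only: the v4l2 control names and the us-to-ns / divide-by-16 scaling are what worked for me with this IMX219 driver, and the nvarguscamerasrc property strings should be double-checked against your L4T version):

import subprocess
import cv2 as cv

def read_ctrl(dev, name):
    # Read a single v4l2 control, e.g. 'exposure' or 'gain'. The control
    # names depend on the sensor driver, so adjust if yours differ.
    out = subprocess.check_output(['v4l2-ctl', '-d', dev, '--get-ctrl', name]).decode()
    return int(out.split(':')[1])

# Run camera 0 in auto-exposure for a couple of seconds first, then:
exposure_us = read_ctrl('/dev/video0', 'exposure')
gain_raw = read_ctrl('/dev/video0', 'gain')

exposure_ns = exposure_us * 1000   # nvarguscamerasrc expects nanoseconds
gain = gain_raw / 16.0             # scaling used for this sensor/driver

# Lock both cameras to the same manual settings by pinning the ranges.
gst_pipe = (
    f'nvarguscamerasrc sensor-id=0 aelock=true '
    f'exposuretimerange="{exposure_ns} {exposure_ns}" '
    f'gainrange="{gain} {gain}" ! '
    'video/x-raw(memory:NVMM),width=3280,height=2464,framerate=21/1,format=NV12 ! '
    'nvvidconv ! video/x-raw,width=1280,height=960,format=BGRx ! '
    'videoconvert ! video/x-raw,format=BGR ! appsink'
)
cap = cv.VideoCapture(gst_pipe, cv.CAP_GSTREAMER)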

Oh, one more small question, just in case anyone knows: as previously said, I’m using Leopard’s carrier board. I have identified that the sockets CAM1, CAM4 and CAM5 correspond to sensor-id=0, 2 and 3 respectively. Now, to use v4l2-ctl I also had to test until I saw that sensor-id=2 is /dev/video1 and sensor-id=3 is /dev/video2. Is there some logic or automatic way to link these? Can I be sure these mappings will be maintained across restarts? (Note: this LI-NANO-CB’s drivers work in such a way that /dev/video0 through /dev/video3 always exist, no matter what’s connected.)

Anyway, I’m pretty happy, to say the least. I just need these last details. Of course, if anyone stumbles upon this thread and would like some help achieving something similar, just ask.

Thanks again for any and all help!
Martín

hello martin.fanaty,

You might refer to the L4T Multimedia API Reference for auto-control settings; you can use its APIs to control the AWB settings.
Please also refer to the Argus sample below.
thanks

$l4t-r32.2/public_sources/tegra_multimedia_api/argus/samples/userAutoWhiteBalance

  1. Please check the Argus API for the details.
  2. v4l2-ctl doesn’t go through the ISP pipeline, so it doesn’t have that information.
  3. The sensor id is the list order of the moduleX entries in tegra-camera-platform; however, the video node is registered in i2c bus order. You can follow this rule to design your use case.
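If it helps, you can read back which sensor sits behind each video node from sysfs (a quick sketch; the exact name string depends on the Leopard driver, so treat it as illustrative):

import glob

# Print each /dev/videoN together with the driver-reported name, which on
# these boards usually encodes the sensor and its i2c bus/address.
for path in sorted(glob.glob('/sys/class/video4linux/video*/name')):
    node = '/dev/' + path.split('/')[-2]
    with open(path) as f:
        print(node, '->', f.read().strip())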

Alright, for the moment we’ll manage without setting the “perfect” WB (as we’re using nvarguscamerasrc, and moving to the Multimedia API would be too much work).

Thanks again.
Martín