Three cameras on Jetson Nano

I’m trying to use three webcams on a Jetson Nano for inference. While it seems possible to use MJPG and/or H.264 @ 640 x 480, I’m running into

GStreamer warning: Embedded video playback halted; module v4l2src2 reported: Failed to allocate required memory.

when I try to open the cameras in YUV (x-raw), even at the lowest resolution (320x240). I already tried to follow this gist in order to allow for four cameras, to no avail.

I know this is not NVIDIA-specific, but maybe somebody has a hint how to get three cams running with uncompressed raw video input.

It turns out that it is not even possible to use three cams with MJPEG encoding :(

The error is:

gst_v4l2_buffer_pool_streamon:v4l2src2:pool:src error with STREAMON 28 (No space left on device)

On whatever device…

Hi,
We suggest you use USB3 cameras to get 5 Gbps. Please take a look at

Thanks for the reply. I’m not sure I understand: you say it could be a “bandwidth limitation problem”, whereas the error message clearly talks about memory issues?
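For reference, the raw numbers can be estimated with a back-of-envelope calculation (a sketch only; the usable isochronous payload share of USB2 is an assumption, and UVC cameras often reserve bandwidth for their maximum advertised mode rather than the negotiated one):

```python
# Back-of-envelope bandwidth estimate for uncompressed YUY2 capture.
# YUY2 packs 2 bytes per pixel; USB2 high-speed is 480 Mbit/s gross,
# of which only a fraction is usable for isochronous video payload.

def yuy2_mbps(width, height, fps):
    """Raw YUY2 stream bandwidth in Mbit/s."""
    return width * height * 2 * fps * 8 / 1e6

per_cam_640 = yuy2_mbps(640, 480, 30)   # ~147 Mbit/s per camera
per_cam_320 = yuy2_mbps(320, 240, 30)   # ~37 Mbit/s per camera

# Three cameras at 640x480 (~442 Mbit/s) clearly exceed the usable USB2
# payload; three at 320x240 (~111 Mbit/s) should fit, so a failure there
# points at buffer allocation/reservation rather than raw throughput.
print(per_cam_640, 3 * per_cam_640, 3 * per_cam_320)
```

So the 320x240 failure does look more like an allocation issue than a pure throughput limit.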

I have this simple Python3 OpenCV script, which runs into this problem too:

import cv2

# Open three V4L2 cameras and force MJPG at 640 x 480.
indexes = [0, 1, 2]
streams = []
for index in indexes:
    stream = cv2.VideoCapture(index)
    stream.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))
    stream.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    stream.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
    streams.append(stream)

# Grab, downscale, and display one frame per camera per iteration.
while True:
    for index, stream in zip(indexes, streams):
        _, frame = stream.read()
        frame = cv2.resize(frame, (320, 240))
        cv2.imshow("Display " + str(index), frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

On the Jetson, even the attempt to reconfigure the capture to MJPG already fails, as seen in the GStreamer log (setProperty: unhandled property):

neil@jetson:~/jetson-inference/build/aarch64/bin$ python3 test.py
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (1184) setProperty OpenCV | GStreamer warning: GStreamer: unhandled property
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (1184) setProperty OpenCV | GStreamer warning: GStreamer: unhandled property
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module v4l2src2 reported: Failed to allocate required memory.
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (886) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
VIDIOC_STREAMON: No space left on device
Traceback (most recent call last):
  File "test.py", line 22, in <module>
    frame = cv2.resize(frame, (320, 240))
cv2.error: OpenCV(4.1.1) /home/nvidia/host/build_opencv/nv_opencv/modules/imgproc/src/resize.cpp:3720: error: (-215:Assertion failed) !ssize.empty() in function 'resize'
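Incidentally, the traceback shows the crash itself comes from cv2.resize() being handed an empty frame: read() returns (False, None) when the pipeline failed to start. A minimal defensive sketch (read_frame is a hypothetical helper, not part of the original script):

```python
def read_frame(stream, size=(320, 240)):
    """Return a resized frame, or None if the capture failed.

    Guards against the (-215:Assertion failed) crash above, which happens
    when read() returns (False, None) and the empty frame goes straight
    into cv2.resize().
    """
    ok, frame = stream.read()
    if not ok or frame is None:
        return None  # capture failed; let the caller skip this iteration
    import cv2  # imported here so the failure path needs no OpenCV
    return cv2.resize(frame, size)
```

With a guard like this the script would report the dead camera instead of dying in resize(), though the underlying STREAMON failure remains.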

The same sequence works perfectly on a Raspberry Pi 4.

I have now also tried three pipelines of this kind, with varying /dev/videoX:

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=YUY2,width=320,height=240,framerate=30/1 ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvvidconv ! nveglglessink

Opening the third one leads to “Failed to allocate memory”:

neil@jetson:~$ gst-launch-1.0 v4l2src device=/dev/video2 ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvvidconv ! nveglglessink
Setting pipeline to PAUSED ...

Using winsys: x11 
Pipeline is live and does not need PREROLL ...
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Failed to allocate required memory.
Additional debug info:
gstv4l2src.c(658): gst_v4l2src_decide_allocation (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
Buffer pool activation failed
Execution ended after 0:00:01.014041254
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

Hi,
Please check dmesg to see if the cameras are enumerated as SuperSpeed. If they come up as high-speed, they are limited to the USB2 bandwidth of 480 Mbps.
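To illustrate, the negotiated speed can also be read without root via lsusb (480M = high-speed/USB2, 5000M = SuperSpeed/USB3); the grep patterns below are just suggestions:

```shell
# List the USB topology with negotiated speeds; UVC cameras show up
# with Class=Video and a trailing 480M or 5000M.
lsusb -t 2>/dev/null | grep -i "class=video" || echo "no video-class devices found"

# Alternatively, search the kernel log for the enumeration messages
# (may require sudo depending on kernel.dmesg_restrict):
# dmesg | grep -iE "new (SuperSpeed|high-speed) USB device"
```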

Well, I know that these are plain USB2 cameras. But I have two sets of three from the same brand, ELP:

With one set I can achieve 3-cam capture at 640 x 480, 30 fps, but I need to use MJPEG compression. The inference rate is about 20 fps per camera, which is good, but the latency seems (from the preview) to be about 1 s for each camera. The same cameras can also be operated at 320 x 240, 30 fps in YUY2, with the same inference result.

But the other set cannot even be operated in 640 x 480 MJPEG, obviously because its frame rate is 60 fps.

My questions:

a) Is an inference rate of 20 fps per camera for resnet FP16 the maximum I can achieve with 3 USB cams? Would that be better with the USB3 cameras you suggest?

b) I don’t see a big difference in inference rate or quality if I operate the cams at 320x240. Is it advisable to capture that low in order to relieve the USB bus, or should I rather try to capture bigger images?

Hi,
Performance of deep learning inference depends on the models. We have a sample config file for Jetson Nano:

/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt

The model is resnet10.

For sources in MJPEG, you may refer to this patch:

It uses nvjpegdec in the patch. Another working solution is to use nvv4l2decoder mjpeg=1.

Thanks @DaneLLL. TBH, I can’t see nvjpegdec in this reference.

This is my working pipeline:

gst-launch-1.0 v4l2src device=/dev/video0 \
! "image/jpeg,width=1280,height=720,framerate=30/1" \
! jpegdec \
! videoconvert \
! nvvideoconvert \
! "video/x-raw(memory:NVMM),fromat=NV12" \
! m.sink_0 nvstreammux name=m batch-size=1 width=640 height=480 \
! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary.txt \
! fakesink

Could you please give me a hint how this would look with “nvjpegdec” or “nvv4l2decoder mjpeg=1”?

Hi,
Sorry I made a typo. It should be jpegdec.

OK, thanks for clarifying. Is there any sample available for the above?

Hi,
The deepstream-image-decode-test sample demonstrates

multifilesrc ! jpegparse ! nvv4l2decoder mjpeg=1 ! ...

You may refer to it and try

v4l2src ! jpegparse ! nvv4l2decoder mjpeg=1 ! ..

Cool. Thanks. Will try and report

Hmm. No, doesn’t work…

This pipe works:

gst-launch-1.0 v4l2src device=/dev/video0 ! "image/jpeg,width=640,height=480" ! jpegdec ! videoconvert ! nvvideoconvert ! "video/x-raw(memory:NVMM),fromat=NV12" ! m.sink_0 nvstreammux name=m batch-size=1 width=640 height=480 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary.txt ! nvdsosd ! nvegltransform bufapi-version=true ! nveglglessink qos=false async=false sync=false

while this does not:

gst-launch-1.0 v4l2src device=/dev/video0 ! "image/jpeg,width=640,height=480" ! jpegparse ! nvv4l2decoder mjpeg=1 ! nvvideoconvert ! "video/x-raw(memory:NVMM),fromat=NV12" ! m.sink_0 nvstreammux name=m batch-size=1 width=640 height=480 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary.txt ! nvdsosd ! nvegltransform bufapi-version=true ! nveglglessink qos=false async=false sync=false

Error is:

bash: ! nvv4l2decoder mjpeg=1: event not found

EDIT: Works after typo fixed

Oops, sorry, typo: it should be nvvideoconvert instead of videoconvert…

Works, but has colour conversion issues. This looks exactly as if the wrong YUV decoding mode is applied (see the colour blocks). Any suggestion?

OMG, there is still another typo: fromat instead of format. The first pipe didn’t seem to care. Fixing it in the second makes the second non-functional… I removed the NV12 format specifier, but the colour conversion issue remains…

Hi,
Please apply the prebuilt library and try again.
Jetson/L4T/r32.4.x patches - eLinux.org
[GSTREAMER]Prebuilt lib for decoding YUV422 MJPEG through nvv4l2decoder

Perfect. Works

OK, I’m a bit surprised now. The inference rate has dropped from 20 fps per camera to 16-17 fps per camera. I don’t see a notable difference between the jpegparse ! nvv4l2decoder mjpeg=1 and the jpegdec ! videoconvert approach w.r.t. fps and latency, but I’m missing 4 to 5 fps now :(

I didn’t make a backup. Could someone please send me the original /usr/lib/aarch64-linux-gnu/tegra/libnvtvmr.so? I think it was from August 2020.

As already reported, the patch for the colour space conversion dropped the achievable frame rate on all three cams by 5 fps per camera. Since I could not revert the file change, I flashed a new image and started from scratch.

In the end I lost everything (not the sources, but a running system).

  1. Since I flashed my first SDK (end of December), the JetPack version has changed from 4.4 to 4.5. However, the “How to” guide does not reflect this and still refers to JP 4.4.

  2. My 3-cam Python script, which worked fine with JP 4.4, is still working, but only with one cam. If I add another or try to work with three cams simultaneously, the whole script crashes with a “segmentation fault”.

  3. I gave up on JP 4.5 and downloaded a JP 4.4 image. I tried to follow the DeepStream “How to” guide, and it failed at the first attempt to install the library dependencies (the first item in the list). So I tried to update/upgrade the system from the console, which ended in an error telling me that the “nvidia-l4t-bootloader package post-installer” had failed.

  4. Right now I have re-flashed the card with JP 4.4 again.

I’m inclined to say “Thanks, Obama” and my confidence in DeepStream is shaken for now.