Problem with two USB cameras

Hi,

I’m having trouble getting two USB cameras to work on a Jetson Nano.
I am using the following pipeline to capture the image:

v4l2src device=/dev/video0 io-mode=2 !
image/jpeg, width=(int)640, height=(int)480, framerate=30/1 !
nvv4l2decoder mjpeg=1 !
nvvidconv ! videoflip method=0 ! video/x-raw, format=BGRx !
videoconvert ! video/x-raw, format=BGR !
appsink
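
Roughly, the capture code looks like this (a simplified single-camera sketch, not my exact script; the window handling is just for illustration):

import cv2

pipeline = (
    "v4l2src device=/dev/video0 io-mode=2 ! "
    "image/jpeg, width=(int)640, height=(int)480, framerate=30/1 ! "
    "nvv4l2decoder mjpeg=1 ! nvvidconv ! videoflip method=0 ! "
    "video/x-raw, format=BGRx ! videoconvert ! "
    "video/x-raw, format=BGR ! appsink"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ret, frame = cap.read()  # frame arrives as a BGR numpy array
    if not ret:
        break
    cv2.imshow("cam0", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()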

When I capture frames from just one camera via OpenCV, everything works fine.
When I try to capture frames from two cameras through OpenCV, I get an error:

jetson@jetson-desktop:~$ /usr/bin/python3 /home/jetson/Desktop/test_4_cameras.py
Opening in BLOCKING MODE
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 277
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 277
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=1, value=27, duration=-1
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Segmentation fault (core dumped)

The script uses Python multithreading.

When I tried to open the stream using the command:

gst-launch-1.0 v4l2src device=/dev/video2 ! xvimagesink

for two cameras in two terminals, the image from the first camera freezes as soon as the second camera's stream opens.

Is it possible to run two (or more) USB cameras? Or has anyone had a similar problem or knows how it should be solved?

Version: JetPack 4.5.1

Cameras used:
Arducam 4K 8MP IMX219:

Video formats:

jetson@jetson-desktop:~$ v4l2-ctl -d /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT

Index       : 0
Type        : Video Capture
Pixel Format: 'YUYV'
Name        : YUYV 4:2:2
    Size: Discrete 3264x2448
        Interval: Discrete 0.500s (2.000 fps)
    Size: Discrete 2592x1944
        Interval: Discrete 0.500s (2.000 fps)
    Size: Discrete 2048x1536
        Interval: Discrete 0.500s (2.000 fps)
    Size: Discrete 1920x1080
        Interval: Discrete 0.200s (5.000 fps)
    Size: Discrete 1600x1200
        Interval: Discrete 0.200s (5.000 fps)
    Size: Discrete 1280x960
        Interval: Discrete 0.200s (5.000 fps)
    Size: Discrete 1280x720
        Interval: Discrete 0.067s (15.000 fps)
    Size: Discrete 800x600
        Interval: Discrete 0.050s (20.000 fps)
    Size: Discrete 640x480
        Interval: Discrete 0.050s (20.000 fps)

Index       : 1
Type        : Video Capture
Pixel Format: 'MJPG' (compressed)
Name        : Motion-JPEG
    Size: Discrete 3264x2448
        Interval: Discrete 0.067s (15.000 fps)
    Size: Discrete 2592x1944
        Interval: Discrete 0.067s (15.000 fps)
    Size: Discrete 2048x1536
        Interval: Discrete 0.067s (15.000 fps)
    Size: Discrete 1920x1080
        Interval: Discrete 0.033s (30.000 fps)
    Size: Discrete 1600x1200
        Interval: Discrete 0.033s (30.000 fps)
    Size: Discrete 1280x960
        Interval: Discrete 0.033s (30.000 fps)
    Size: Discrete 1280x720
        Interval: Discrete 0.033s (30.000 fps)
    Size: Discrete 800x600
        Interval: Discrete 0.033s (30.000 fps)
    Size: Discrete 640x480
        Interval: Discrete 0.033s (30.000 fps)

I also tried another camera, a Creative VF0700, and the same problem occurs.

Among other things, I found this tutorial:

But I have trouble understanding how I should apply the given solution to my problem.

I finally managed to get 3 cams running at 640 x 480, but only after 5 (!) attempts to find a set of 3 cams that would do it. In the end I used the DeepStream SDK for this. This is my camera brand:

And you definitely cannot use YUV as input. I am not sure how it behaves with H.264 (I fear increased latency), but MJPEG works.

gst-launch-1.0 v4l2src device=/dev/video0 ! "image/jpeg,width=640,height=480" ! jpegdec ! videoconvert ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! m.sink_0 nvstreammux name=m batch-size=1 width=640 height=480 ! nvinfer config-file-path=./config.txt ! nvtracker tracker-width=640 tracker-height=480 ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_nvdcf.so ! nvdsosd ! nvegltransform bufapi-version=true ! nveglglessink qos=false async=false sync=false

Run this N times on your Nano with varying /dev/videoN once you have JP4.5 and DeepStream 5.0x SDK installed.

Play with cluster-mode and also with pre-cluster-threshold.

The ./config.txt is like so and expects to have the model files in the same directory. You need to copy them from the DeepStream model directories.

#
# Following properties are always recommended:
#   batch-size(Default=1)
#
# Other optional properties:
#   net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
#   model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
#   mean-file, gie-unique-id(Default=0), offsets, process-mode (Default=1 i.e. primary),
#   custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.

#
# DEPRECATED. FOR USE IN SCRIPTS ONLY
# Settings imported to config.yaml
#

[property]
workspace-size=600
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=./models/primary-detector-nano/resnet10.caffemodel
proto-file=./models/primary-detector-nano/resnet10.prototxt
labelfile-path=./models/primary-detector-nano/labels.txt
model-engine-file=./models/primary-detector-nano/resnet10.caffemodel_b3_gpu0_fp16.engine
force-implicit-batch-dim=1
batch-size=3
network-mode=2
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
#scaling-filter=0
#scaling-compute-hw=0
cluster-mode=3



[class-attrs-all]
pre-cluster-threshold=0.5
eps=0.2
group-threshold=1

HTH

Hi,
For OpenCV, generally we run the case like:
Doesn't work nvv4l2decoder for decoding RTSP in gstreamer + opencv - #3 by DaneLLL
For two USB cameras, please try to have two cv2.VideoCapture() like:

cap0 = cv2.VideoCapture("v4l2src device=/dev/video0 ! ...")
cap1 = cv2.VideoCapture("v4l2src device=/dev/video1 ! ...")
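
A minimal sketch with one capture thread per camera could look like this (the pipeline body here is assumed to be the MJPEG pipeline from the first post; process the frames as you need):

import threading
import cv2

def make_pipeline(dev):
    # Assumption: same MJPEG pipeline as in the first post, parameterized by device node
    return (
        f"v4l2src device={dev} io-mode=2 ! "
        "image/jpeg, width=(int)640, height=(int)480, framerate=30/1 ! "
        "nvv4l2decoder mjpeg=1 ! nvvidconv ! "
        "video/x-raw, format=BGRx ! videoconvert ! "
        "video/x-raw, format=BGR ! appsink"
    )

def reader(dev):
    cap = cv2.VideoCapture(make_pipeline(dev), cv2.CAP_GSTREAMER)
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        # process frame here (avoid cv2.imshow outside the main thread)
    cap.release()

threads = [threading.Thread(target=reader, args=(dev,))
           for dev in ("/dev/video0", "/dev/video1")]
for t in threads:
    t.start()
for t in threads:
    t.join()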

foreverneilyoung thanks for the reply!

Using Deepstream as per your advice I was able to run a preview from 3 USB cameras, in 3 separate consoles.

DaneLLL thank you for your reply!

In my program code I use a pipeline like the one you wrote.

However, more problems have arisen:
I tried to increase the resolution of the stream, but got errors:

jetson@jetson-desktop:~/test$ gst-launch-1.0 v4l2src device=/dev/video2 ! "image/jpeg,width=960,height=544" ! jpegdec ! videoconvert ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! m.sink_0 nvstreammux name=m batch-size=1 width=960 height=544 ! nvinfer config-file-path=./config.txt ! nvtracker tracker-width=960 tracker-height=544 ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_nvdcf.so ! nvdsosd ! nvegltransform bufapi-version=true ! nveglglessink qos=false async=false sync=false
Setting pipeline to PAUSED …

Using winsys: x11
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_nvdcf.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvDCF] Initialized
0:00:12.295260202 11856 0x5580070660 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/home/jetson/test/models/Primary_Detector_Nano/resnet10.caffemodel_b3_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x272x480
1 OUTPUT kFLOAT conv2d_bbox 16x17x30
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x17x30

0:00:12.295514424 11856 0x5580070660 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /home/jetson/test/models/Primary_Detector_Nano/resnet10.caffemodel_b3_gpu0_fp16.engine
0:00:12.318342465 11856 0x5580070660 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:./config.txt sucessfully
Pipeline is live and does not need PREROLL …
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to PAUSED …
Setting pipeline to READY …
[NvDCF] De-initialized
Setting pipeline to NULL …
Freeing pipeline …

or

jetson@jetson-desktop:~/test$ gst-launch-1.0 v4l2src device=/dev/video2 ! "image/jpeg,width=1920,height=1080" ! jpegdec ! videoconvert ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=./config.txt ! nvtracker tracker-width=1920 tracker-height=1080 ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_nvdcf.so ! nvdsosd ! nvegltransform bufapi-version=true ! nveglglessink qos=false async=false sync=false
Setting pipeline to PAUSED …

Using winsys: x11
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_nvdcf.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF

!![ERROR] tracker-height(1080) must be a multiple of 32
An exception occurred. tracker-height(1080) must be a multiple of 32
terminate called after throwing an instance of 'std::exception'
what(): std::exception
Aborted (core dumped)

How should the resolution be increased?

Another problem:
How do I read the camera image in my program code so that I can process it afterwards? Can this be done with OpenCV, or should it be done some other way?

I tried to retrieve the image with the command:

video_capture = cv2.VideoCapture('v4l2src device=/dev/video2 ! "image/jpeg,width=640,height=480" ! jpegdec ! videoconvert ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! m.sink_0 nvstreammux name=m batch-size=1 width=640 height=480 ! nvinfer config-file-path=config.txt ! nvtracker tracker-width=640 tracker-height=480 ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_nvdcf.so ! nvdsosd ! nvegltransform bufapi-version=true ! nveglglessink qos=false async=false sync=false', cv2.CAP_GSTREAMER)

This resulted in an error:

jetson@jetson-desktop:~$ /usr/bin/python3 /home/jetson/Desktop/test_4_cameras.py
(python3:11451): GStreamer-CRITICAL **: 14:51:54.964: gst_element_make_from_uri: assertion 'gst_uri_is_valid (uri)' failed
(python3:11451): GStreamer-CRITICAL **: 14:51:55.156: gst_element_make_from_uri: assertion 'gst_uri_is_valid (uri)' failed
Failed to load config file: No such file or directory
*** ERROR: <gst_nvinfer_parse_config_file:1260>: failed

[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (711) open OpenCV | GStreamer warning: Error opening bin: syntax error
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:656 Failed to create CaptureSession
Segmentation fault (core dumped)

Hi,
For running in OpenCV, you need to terminate the pipeline with appsink, as demonstrated in the sample. If your use case is just video preview, you can run a gst-launch-1.0 command instead.

"Internal data stream error" usually indicates USB bus congestion: the USB 2 bus is simply overloaded. The usual advice is to use USB 3 cams, but I never tried that.
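
A rough back-of-the-envelope estimate (approximate numbers, not measured on your setup) shows why uncompressed formats overload the bus while MJPEG does not:

# Rough per-camera bandwidth estimate (approximate figures):
w, h, fps = 640, 480, 30
yuyv_mb_s = w * h * 2 * fps / 1e6  # YUYV is 2 bytes/pixel -> ~18.4 MB/s
print(f"Uncompressed YUYV: {yuyv_mb_s:.1f} MB/s per camera")
# Practical USB 2.0 throughput is on the order of 30-40 MB/s shared by every
# device on the bus, so two uncompressed streams already approach the limit;
# MJPEG typically compresses 10-20x and leaves headroom for several cameras.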

!![ERROR] tracker-height(1080) must be a multiple of 32
An exception occurred. tracker-height(1080) must be a multiple of 32

This error is self-explanatory: your chosen height of 1080 is not a multiple of 32. The nearest valid values are 1056 and 1088.
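
If you want to derive a valid value programmatically, a trivial sketch that rounds up to the next multiple of 32:

def align32(x):
    # Round up to the next multiple of 32, e.g. 1080 -> 1088
    return (x + 31) // 32 * 32

print(align32(1080))  # 1088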